00:00:00.000 Started by upstream project "autotest-per-patch" build number 132743 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.217 The recommended git tool is: git 00:00:00.218 using credential 00000000-0000-0000-0000-000000000002 00:00:00.221 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.234 Fetching changes from the remote Git repository 00:00:00.239 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.255 Using shallow fetch with depth 1 00:00:00.255 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.255 > git --version # timeout=10 00:00:00.270 > git --version # 'git version 2.39.2' 00:00:00.270 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.284 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.284 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.812 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.826 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.842 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.843 > git config core.sparsecheckout # timeout=10 00:00:05.859 > git read-tree -mu HEAD # timeout=10 00:00:05.875 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.903 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.904 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.014 [Pipeline] Start of Pipeline 00:00:06.028 [Pipeline] library 00:00:06.030 Loading library shm_lib@master 00:00:06.030 Library shm_lib@master is cached. Copying from home. 00:00:06.047 [Pipeline] node 00:00:06.056 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.057 [Pipeline] { 00:00:06.066 [Pipeline] catchError 00:00:06.067 [Pipeline] { 00:00:06.077 [Pipeline] wrap 00:00:06.085 [Pipeline] { 00:00:06.092 [Pipeline] stage 00:00:06.094 [Pipeline] { (Prologue) 00:00:06.290 [Pipeline] sh 00:00:06.575 + logger -p user.info -t JENKINS-CI 00:00:06.631 [Pipeline] echo 00:00:06.633 Node: CYP9 00:00:06.642 [Pipeline] sh 00:00:06.957 [Pipeline] setCustomBuildProperty 00:00:06.968 [Pipeline] echo 00:00:06.968 Cleanup processes 00:00:06.972 [Pipeline] sh 00:00:07.256 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.256 1384632 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.268 [Pipeline] sh 00:00:07.557 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.557 ++ grep -v 'sudo pgrep' 00:00:07.557 ++ awk '{print $1}' 00:00:07.557 + sudo kill -9 00:00:07.557 + true 00:00:07.572 [Pipeline] cleanWs 00:00:07.583 [WS-CLEANUP] Deleting project workspace... 00:00:07.583 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.590 [WS-CLEANUP] done 00:00:07.595 [Pipeline] setCustomBuildProperty 00:00:07.607 [Pipeline] sh 00:00:07.919 + sudo git config --global --replace-all safe.directory '*' 00:00:08.037 [Pipeline] httpRequest 00:00:08.699 [Pipeline] echo 00:00:08.701 Sorcerer 10.211.164.101 is alive 00:00:08.708 [Pipeline] retry 00:00:08.709 [Pipeline] { 00:00:08.719 [Pipeline] httpRequest 00:00:08.723 HttpMethod: GET 00:00:08.723 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.724 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.752 Response Code: HTTP/1.1 200 OK 00:00:08.752 Success: Status code 200 is in the accepted range: 200,404 00:00:08.752 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:25.815 [Pipeline] } 00:00:25.833 [Pipeline] // retry 00:00:25.839 [Pipeline] sh 00:00:26.127 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:26.141 [Pipeline] httpRequest 00:00:26.532 [Pipeline] echo 00:00:26.533 Sorcerer 10.211.164.101 is alive 00:00:26.543 [Pipeline] retry 00:00:26.545 [Pipeline] { 00:00:26.559 [Pipeline] httpRequest 00:00:26.564 HttpMethod: GET 00:00:26.564 URL: http://10.211.164.101/packages/spdk_99034762d6d40de648cabcb12745d7e7f8583339.tar.gz 00:00:26.565 Sending request to url: http://10.211.164.101/packages/spdk_99034762d6d40de648cabcb12745d7e7f8583339.tar.gz 00:00:26.574 Response Code: HTTP/1.1 200 OK 00:00:26.575 Success: Status code 200 is in the accepted range: 200,404 00:00:26.575 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_99034762d6d40de648cabcb12745d7e7f8583339.tar.gz 00:03:25.128 [Pipeline] } 00:03:25.147 [Pipeline] // retry 00:03:25.154 [Pipeline] sh 00:03:25.447 + tar --no-same-owner -xf spdk_99034762d6d40de648cabcb12745d7e7f8583339.tar.gz 00:03:28.770 [Pipeline] sh 00:03:29.063 + git -C spdk log --oneline -n5 00:03:29.063 99034762d nvmf: Clean unassociated_qpairs on connect error 00:03:29.063 269888dd3 nvmf/rdma: Fix destroy of uninitialized qpair 00:03:29.063 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:03:29.063 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:03:29.063 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:03:29.077 [Pipeline] } 00:03:29.093 [Pipeline] // stage 00:03:29.102 [Pipeline] stage 00:03:29.105 [Pipeline] { (Prepare) 00:03:29.122 [Pipeline] writeFile 00:03:29.139 [Pipeline] sh 00:03:29.430 + logger -p user.info -t JENKINS-CI 00:03:29.445 [Pipeline] sh 00:03:29.736 + logger -p user.info -t JENKINS-CI 00:03:29.751 [Pipeline] sh 00:03:30.042 + cat autorun-spdk.conf 00:03:30.042 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:30.042 SPDK_TEST_NVMF=1 00:03:30.042 SPDK_TEST_NVME_CLI=1 00:03:30.042 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:30.042 SPDK_TEST_NVMF_NICS=e810 00:03:30.042 SPDK_TEST_VFIOUSER=1 00:03:30.042 SPDK_RUN_UBSAN=1 00:03:30.042 NET_TYPE=phy 00:03:30.051 RUN_NIGHTLY=0 00:03:30.056 [Pipeline] readFile 00:03:30.085 [Pipeline] withEnv 00:03:30.088 [Pipeline] { 00:03:30.101 [Pipeline] sh 00:03:30.392 + set -ex 00:03:30.392 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:30.392 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:30.392 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:30.392 ++ SPDK_TEST_NVMF=1 00:03:30.392 ++ SPDK_TEST_NVME_CLI=1 00:03:30.392 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:03:30.392 ++ SPDK_TEST_NVMF_NICS=e810 00:03:30.392 ++ SPDK_TEST_VFIOUSER=1 00:03:30.392 ++ SPDK_RUN_UBSAN=1 00:03:30.392 ++ NET_TYPE=phy 00:03:30.392 ++ RUN_NIGHTLY=0 00:03:30.392 + case $SPDK_TEST_NVMF_NICS in 00:03:30.392 + DRIVERS=ice 00:03:30.392 + [[ tcp == \r\d\m\a ]] 00:03:30.392 + [[ -n ice ]] 00:03:30.392 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:30.392 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:30.392 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:30.392 rmmod: ERROR: Module irdma is not currently loaded 00:03:30.392 rmmod: ERROR: Module i40iw is not currently loaded 00:03:30.392 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:30.392 + true 00:03:30.392 + for D in $DRIVERS 00:03:30.392 + sudo modprobe ice 00:03:30.392 + exit 0 00:03:30.403 [Pipeline] } 00:03:30.419 [Pipeline] // withEnv 00:03:30.425 [Pipeline] } 00:03:30.443 [Pipeline] // stage 00:03:30.455 [Pipeline] catchError 00:03:30.457 [Pipeline] { 00:03:30.472 [Pipeline] timeout 00:03:30.472 Timeout set to expire in 1 hr 0 min 00:03:30.474 [Pipeline] { 00:03:30.489 [Pipeline] stage 00:03:30.491 [Pipeline] { (Tests) 00:03:30.508 [Pipeline] sh 00:03:30.800 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:30.800 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:30.800 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:30.800 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:30.801 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:30.801 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:30.801 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:30.801 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:30.801 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:30.801 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:30.801 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:30.801 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:30.801 + source /etc/os-release 00:03:30.801 ++ NAME='Fedora Linux' 00:03:30.801 ++ VERSION='39 (Cloud Edition)' 00:03:30.801 ++ ID=fedora 00:03:30.801 ++ VERSION_ID=39 00:03:30.801 ++ VERSION_CODENAME= 00:03:30.801 ++ PLATFORM_ID=platform:f39 00:03:30.801 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:30.801 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:30.801 ++ LOGO=fedora-logo-icon 00:03:30.801 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:30.801 ++ HOME_URL=https://fedoraproject.org/ 00:03:30.801 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:30.801 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:30.801 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:30.801 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:30.801 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:30.801 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:30.801 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:30.801 ++ SUPPORT_END=2024-11-12 00:03:30.801 ++ VARIANT='Cloud Edition' 00:03:30.801 ++ VARIANT_ID=cloud 00:03:30.801 + uname -a 00:03:30.801 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:30.801 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:34.107 Hugepages 00:03:34.107 node hugesize free / total 00:03:34.107 node0 1048576kB 0 / 0 00:03:34.107 node0 2048kB 0 / 0 00:03:34.107 node1 1048576kB 0 / 0 00:03:34.107 node1 2048kB 0 / 0 
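Aside: the per-node hugepage counters that setup.sh status prints above come straight from the kernel's sysfs hugepage accounting. A minimal illustrative sketch that reproduces the same "node hugesize free / total" view (the sysfs layout is standard Linux; this script is an illustration, not part of the SPDK tree):

#!/usr/bin/env bash
# Approximate the "node hugesize free / total" listing from setup.sh status.
# Reads the standard per-NUMA-node hugepage counters exposed by the kernel.
for node in /sys/devices/system/node/node[0-9]*; do
  for hp in "$node"/hugepages/hugepages-*kB; do
    size=${hp##*hugepages-}                   # e.g. 2048kB
    free=$(cat "$hp/free_hugepages")
    total=$(cat "$hp/nr_hugepages")
    printf '%s %s %s / %s\n' "$(basename "$node")" "$size" "$free" "$total"
  done
done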
00:03:34.107 00:03:34.107 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:34.107 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:34.107 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:34.107 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:34.107 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:34.107 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:34.107 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:34.107 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:34.107 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:34.107 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:34.107 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:34.107 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:34.107 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:34.107 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:34.107 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:34.107 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:34.107 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:34.107 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:34.107 + rm -f /tmp/spdk-ld-path 00:03:34.107 + source autorun-spdk.conf 00:03:34.107 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:34.107 ++ SPDK_TEST_NVMF=1 00:03:34.107 ++ SPDK_TEST_NVME_CLI=1 00:03:34.107 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:34.107 ++ SPDK_TEST_NVMF_NICS=e810 00:03:34.107 ++ SPDK_TEST_VFIOUSER=1 00:03:34.107 ++ SPDK_RUN_UBSAN=1 00:03:34.107 ++ NET_TYPE=phy 00:03:34.107 ++ RUN_NIGHTLY=0 00:03:34.107 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:34.107 + [[ -n '' ]] 00:03:34.107 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:34.107 + for M in /var/spdk/build-*-manifest.txt 00:03:34.107 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:34.107 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:34.107 + for M in /var/spdk/build-*-manifest.txt 00:03:34.107 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:34.107 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:34.107 + for M in /var/spdk/build-*-manifest.txt 00:03:34.107 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:34.107 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:34.107 ++ uname 00:03:34.107 + [[ Linux == \L\i\n\u\x ]] 00:03:34.107 + sudo dmesg -T 00:03:34.107 + sudo dmesg --clear 00:03:34.107 + dmesg_pid=1386197 00:03:34.107 + [[ Fedora Linux == FreeBSD ]] 00:03:34.107 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:34.107 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:34.107 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:34.107 + [[ -x /usr/src/fio-static/fio ]] 00:03:34.107 + export FIO_BIN=/usr/src/fio-static/fio 00:03:34.107 + FIO_BIN=/usr/src/fio-static/fio 00:03:34.107 + sudo dmesg -Tw 00:03:34.107 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:34.107 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:34.107 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:34.107 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:34.107 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:34.107 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:34.107 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:34.107 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:34.107 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:34.107 17:19:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:34.107 17:19:26 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:34.107 17:19:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:34.107 17:19:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:34.107 17:19:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:03:34.107 17:19:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:34.107 17:19:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:03:34.107 17:19:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:03:34.107 17:19:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:03:34.107 17:19:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:03:34.107 17:19:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:34.107 17:19:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:34.107 17:19:26 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:34.369 17:19:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:34.370 17:19:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:34.370 17:19:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:34.370 17:19:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:34.370 17:19:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:34.370 17:19:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:34.370 17:19:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.370 17:19:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.370 17:19:26 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.370 17:19:26 -- paths/export.sh@5 -- $ export PATH 00:03:34.370 17:19:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.370 17:19:26 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:34.370 17:19:26 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:34.370 17:19:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733501966.XXXXXX 00:03:34.370 17:19:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733501966.CikXNW 00:03:34.370 17:19:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:34.370 17:19:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:34.370 17:19:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:34.370 17:19:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:34.370 17:19:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:34.370 17:19:26 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:34.370 17:19:26 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:34.370 17:19:26 -- common/autotest_common.sh@10 -- $ set +x 00:03:34.370 17:19:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:34.370 17:19:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:34.370 17:19:26 -- pm/common@17 -- $ local monitor 00:03:34.370 17:19:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.370 17:19:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.370 17:19:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.370 17:19:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.370 17:19:26 -- pm/common@21 -- $ date +%s 00:03:34.370 17:19:26 -- pm/common@21 -- $ date +%s 00:03:34.370 17:19:26 -- pm/common@25 -- $ sleep 1 00:03:34.370 17:19:26 -- pm/common@21 -- $ date +%s 00:03:34.370 17:19:26 -- pm/common@21 -- $ date +%s 00:03:34.370 17:19:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733501966 00:03:34.370 17:19:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733501966 00:03:34.370 17:19:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733501966 00:03:34.370 17:19:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733501966 00:03:34.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733501966_collect-cpu-load.pm.log 00:03:34.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733501966_collect-vmstat.pm.log 00:03:34.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733501966_collect-cpu-temp.pm.log 00:03:34.370 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733501966_collect-bmc-pm.bmc.pm.log 00:03:35.314 17:19:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:35.314 17:19:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:35.314 17:19:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:35.314 17:19:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:35.314 17:19:27 -- spdk/autobuild.sh@16 -- $ date -u 00:03:35.314 Fri Dec 6 04:19:27 PM UTC 2024 00:03:35.314 17:19:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:35.314 v25.01-pre-305-g99034762d 00:03:35.314 17:19:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:35.314 17:19:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:35.314 17:19:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:35.314 17:19:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:35.314 17:19:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:35.314 17:19:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:35.314 ************************************ 00:03:35.314 START TEST ubsan 00:03:35.314 ************************************ 00:03:35.314 17:19:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:35.314 using ubsan 00:03:35.314 00:03:35.314 real 0m0.001s 00:03:35.314 user 0m0.001s 00:03:35.314 sys 0m0.000s 00:03:35.314 17:19:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:35.314 17:19:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:35.314 ************************************ 00:03:35.314 END TEST ubsan 00:03:35.314 ************************************ 00:03:35.576 17:19:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:35.576 17:19:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:35.576 17:19:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:35.576 17:19:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:35.576 17:19:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:35.576 17:19:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:35.576 17:19:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:35.576 17:19:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:35.576 
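Aside: the four collect-* monitors started at the top of this autobuild step all follow one pattern: a collector launched in the background, writing to a log whose name carries a shared epoch stamp (here 1733501966) so the cpu-load, vmstat, cpu-temp, and BMC traces can be correlated afterwards. A minimal generic sketch of that pattern, with plain vmstat standing in for the SPDK collectors in scripts/perf/pm/ (OUTDIR and the one-second interval are illustrative assumptions, not values from this log):

#!/usr/bin/env bash
# Start a background resource monitor with an epoch-stamped log name,
# then stop it when the monitored work finishes.
OUTDIR=./power                      # illustrative; the CI writes under .../output/power
mkdir -p "$OUTDIR"
STAMP=$(date +%s)                   # shared stamp, like monitor.autobuild.sh.1733501966
vmstat 1 > "$OUTDIR/monitor.autobuild.sh.${STAMP}_collect-vmstat.pm.log" 2>&1 &
MON_PID=$!
# ... run the build under test here ...
kill "$MON_PID" 2>/dev/null || true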
17:19:27 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:35.576 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:35.576 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:36.149 Using 'verbs' RDMA provider 00:03:51.672 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:03.902 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:04.734 Creating mk/config.mk...done. 00:04:04.734 Creating mk/cc.flags.mk...done. 00:04:04.734 Type 'make' to build. 00:04:04.734 17:19:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:04:04.734 17:19:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:04.734 17:19:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:04.734 17:19:56 -- common/autotest_common.sh@10 -- $ set +x 00:04:04.734 ************************************ 00:04:04.734 START TEST make 00:04:04.734 ************************************ 00:04:04.734 17:19:56 make -- common/autotest_common.sh@1129 -- $ make -j144 00:04:04.995 make[1]: Nothing to be done for 'all'. 00:04:06.917 The Meson build system 00:04:06.917 Version: 1.5.0 00:04:06.917 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:06.917 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:06.917 Build type: native build 00:04:06.917 Project name: libvfio-user 00:04:06.917 Project version: 0.0.1 00:04:06.917 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:06.917 C linker for the host machine: cc ld.bfd 2.40-14 00:04:06.917 Host machine cpu family: x86_64 00:04:06.917 Host machine cpu: x86_64 00:04:06.917 Run-time dependency threads found: YES 00:04:06.917 Library dl found: YES 00:04:06.917 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:06.917 Run-time dependency json-c found: YES 0.17 00:04:06.917 Run-time dependency cmocka found: YES 1.1.7 00:04:06.917 Program pytest-3 found: NO 00:04:06.917 Program flake8 found: NO 00:04:06.917 Program misspell-fixer found: NO 00:04:06.917 Program restructuredtext-lint found: NO 00:04:06.917 Program valgrind found: YES (/usr/bin/valgrind) 00:04:06.917 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:06.917 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:06.917 Compiler for C supports arguments -Wwrite-strings: YES 00:04:06.917 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:06.917 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:06.917 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:06.917 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:06.917 Build targets in project: 8 00:04:06.917 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:06.917 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:06.917 00:04:06.917 libvfio-user 0.0.1 00:04:06.917 00:04:06.917 User defined options 00:04:06.917 buildtype : debug 00:04:06.917 default_library: shared 00:04:06.917 libdir : /usr/local/lib 00:04:06.917 00:04:06.917 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:06.917 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:07.179 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:07.179 [2/37] Compiling C object samples/null.p/null.c.o 00:04:07.179 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:07.179 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:07.179 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:07.179 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:07.179 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:07.179 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:07.179 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:07.179 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:07.179 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:07.179 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:07.179 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:07.179 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:07.179 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:07.179 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:07.179 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:07.179 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:07.179 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:07.179 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:07.179 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:07.179 [22/37] Compiling C object samples/server.p/server.c.o 00:04:07.179 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:07.179 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:07.179 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:07.179 [26/37] Compiling C object samples/client.p/client.c.o 00:04:07.179 [27/37] Linking target samples/client 00:04:07.179 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:07.441 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:07.441 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:04:07.441 [31/37] Linking target test/unit_tests 00:04:07.441 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:07.441 [33/37] Linking target samples/null 00:04:07.441 [34/37] Linking target samples/server 00:04:07.441 [35/37] Linking target samples/gpio-pci-idio-16 00:04:07.441 [36/37] Linking target samples/lspci 00:04:07.441 [37/37] Linking target samples/shadow_ioeventfd_server 00:04:07.703 INFO: autodetecting backend as ninja 00:04:07.703 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:04:07.703 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:07.965 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:07.965 ninja: no work to do. 00:04:14.558 The Meson build system 00:04:14.558 Version: 1.5.0 00:04:14.558 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:04:14.558 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:04:14.558 Build type: native build 00:04:14.558 Program cat found: YES (/usr/bin/cat) 00:04:14.558 Project name: DPDK 00:04:14.559 Project version: 24.03.0 00:04:14.559 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:14.559 C linker for the host machine: cc ld.bfd 2.40-14 00:04:14.559 Host machine cpu family: x86_64 00:04:14.559 Host machine cpu: x86_64 00:04:14.559 Message: ## Building in Developer Mode ## 00:04:14.559 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:14.559 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:14.559 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:14.559 Program python3 found: YES (/usr/bin/python3) 00:04:14.559 Program cat found: YES (/usr/bin/cat) 00:04:14.559 Compiler for C supports arguments -march=native: YES 00:04:14.559 Checking for size of "void *" : 8 00:04:14.559 Checking for size of "void *" : 8 (cached) 00:04:14.559 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:14.559 Library m found: YES 00:04:14.559 Library numa found: YES 00:04:14.559 Has header "numaif.h" : YES 00:04:14.559 Library fdt found: NO 00:04:14.559 Library execinfo found: NO 00:04:14.559 Has header "execinfo.h" : YES 00:04:14.559 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:14.559 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:14.559 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:14.559 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:14.559 Run-time dependency openssl found: YES 3.1.1 00:04:14.559 Run-time dependency libpcap found: YES 1.10.4 00:04:14.559 Has header "pcap.h" with dependency libpcap: YES 00:04:14.559 Compiler for C supports arguments -Wcast-qual: YES 00:04:14.559 Compiler for C supports arguments -Wdeprecated: YES 00:04:14.559 Compiler for C supports arguments -Wformat: YES 00:04:14.559 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:14.559 Compiler for C supports arguments -Wformat-security: NO 00:04:14.559 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:14.559 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:14.559 Compiler for C supports arguments -Wnested-externs: YES 00:04:14.559 Compiler for C supports arguments -Wold-style-definition: YES 00:04:14.559 Compiler for C supports arguments -Wpointer-arith: YES 00:04:14.559 Compiler for C supports arguments -Wsign-compare: YES 00:04:14.559 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:14.559 Compiler for C supports arguments -Wundef: YES 00:04:14.559 Compiler for C supports arguments -Wwrite-strings: YES 00:04:14.559 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:14.559 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:04:14.559 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:14.559 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:14.559 Program objdump found: YES (/usr/bin/objdump) 00:04:14.559 Compiler for C supports arguments -mavx512f: YES 00:04:14.559 Checking if "AVX512 checking" compiles: YES 00:04:14.559 Fetching value of define "__SSE4_2__" : 1 00:04:14.559 Fetching value of define "__AES__" : 1 00:04:14.559 Fetching value of define "__AVX__" : 1 00:04:14.559 Fetching value of define "__AVX2__" : 1 00:04:14.559 Fetching value of define "__AVX512BW__" : 1 00:04:14.559 Fetching value of define "__AVX512CD__" : 1 00:04:14.559 Fetching value of define "__AVX512DQ__" : 1 00:04:14.559 Fetching value of define "__AVX512F__" : 1 00:04:14.559 Fetching value of define "__AVX512VL__" : 1 00:04:14.559 Fetching value of define "__PCLMUL__" : 1 00:04:14.559 Fetching value of define "__RDRND__" : 1 00:04:14.559 Fetching value of define "__RDSEED__" : 1 00:04:14.559 Fetching value of define "__VPCLMULQDQ__" : 1 00:04:14.559 Fetching value of define "__znver1__" : (undefined) 00:04:14.559 Fetching value of define "__znver2__" : (undefined) 00:04:14.559 Fetching value of define "__znver3__" : (undefined) 00:04:14.559 Fetching value of define "__znver4__" : (undefined) 00:04:14.559 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:14.559 Message: lib/log: Defining dependency "log" 00:04:14.559 Message: lib/kvargs: Defining dependency "kvargs" 00:04:14.559 Message: lib/telemetry: Defining dependency "telemetry" 00:04:14.559 Checking for function "getentropy" : NO 00:04:14.559 Message: lib/eal: Defining dependency "eal" 00:04:14.559 Message: lib/ring: Defining dependency "ring" 00:04:14.559 Message: lib/rcu: Defining dependency "rcu" 00:04:14.559 Message: lib/mempool: Defining dependency "mempool" 00:04:14.559 Message: lib/mbuf: Defining dependency "mbuf" 00:04:14.559 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:14.559 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:14.559 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:14.559 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:14.559 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:14.559 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:04:14.559 Compiler for C supports arguments -mpclmul: YES 00:04:14.559 Compiler for C supports arguments -maes: YES 00:04:14.559 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:14.559 Compiler for C supports arguments -mavx512bw: YES 00:04:14.559 Compiler for C supports arguments -mavx512dq: YES 00:04:14.559 Compiler for C supports arguments -mavx512vl: YES 00:04:14.559 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:14.559 Compiler for C supports arguments -mavx2: YES 00:04:14.559 Compiler for C supports arguments -mavx: YES 00:04:14.559 Message: lib/net: Defining dependency "net" 00:04:14.559 Message: lib/meter: Defining dependency "meter" 00:04:14.559 Message: lib/ethdev: Defining dependency "ethdev" 00:04:14.559 Message: lib/pci: Defining dependency "pci" 00:04:14.559 Message: lib/cmdline: Defining dependency "cmdline" 00:04:14.559 Message: lib/hash: Defining dependency "hash" 00:04:14.559 Message: lib/timer: Defining dependency "timer" 00:04:14.559 Message: lib/compressdev: Defining dependency "compressdev" 00:04:14.559 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:14.559 Message: lib/dmadev: Defining dependency "dmadev" 
00:04:14.559 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:14.559 Message: lib/power: Defining dependency "power" 00:04:14.559 Message: lib/reorder: Defining dependency "reorder" 00:04:14.559 Message: lib/security: Defining dependency "security" 00:04:14.559 Has header "linux/userfaultfd.h" : YES 00:04:14.559 Has header "linux/vduse.h" : YES 00:04:14.559 Message: lib/vhost: Defining dependency "vhost" 00:04:14.559 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:14.559 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:14.559 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:14.559 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:14.559 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:14.559 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:14.559 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:14.559 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:14.559 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:14.559 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:14.559 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:14.559 Configuring doxy-api-html.conf using configuration 00:04:14.559 Configuring doxy-api-man.conf using configuration 00:04:14.559 Program mandb found: YES (/usr/bin/mandb) 00:04:14.559 Program sphinx-build found: NO 00:04:14.559 Configuring rte_build_config.h using configuration 00:04:14.559 Message: 00:04:14.559 ================= 00:04:14.559 Applications Enabled 00:04:14.559 ================= 00:04:14.559 00:04:14.559 apps: 00:04:14.559 00:04:14.559 00:04:14.559 Message: 00:04:14.559 ================= 00:04:14.559 Libraries Enabled 00:04:14.559 ================= 00:04:14.559 00:04:14.559 libs: 00:04:14.559 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:14.559 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:14.559 cryptodev, dmadev, power, reorder, security, vhost, 00:04:14.559 00:04:14.559 Message: 00:04:14.559 =============== 00:04:14.559 Drivers Enabled 00:04:14.559 =============== 00:04:14.559 00:04:14.559 common: 00:04:14.559 00:04:14.559 bus: 00:04:14.559 pci, vdev, 00:04:14.559 mempool: 00:04:14.559 ring, 00:04:14.559 dma: 00:04:14.559 00:04:14.559 net: 00:04:14.559 00:04:14.559 crypto: 00:04:14.559 00:04:14.559 compress: 00:04:14.559 00:04:14.559 vdpa: 00:04:14.559 00:04:14.559 00:04:14.559 Message: 00:04:14.559 ================= 00:04:14.559 Content Skipped 00:04:14.559 ================= 00:04:14.559 00:04:14.559 apps: 00:04:14.559 dumpcap: explicitly disabled via build config 00:04:14.559 graph: explicitly disabled via build config 00:04:14.559 pdump: explicitly disabled via build config 00:04:14.559 proc-info: explicitly disabled via build config 00:04:14.559 test-acl: explicitly disabled via build config 00:04:14.559 test-bbdev: explicitly disabled via build config 00:04:14.559 test-cmdline: explicitly disabled via build config 00:04:14.559 test-compress-perf: explicitly disabled via build config 00:04:14.559 test-crypto-perf: explicitly disabled via build config 00:04:14.559 test-dma-perf: explicitly disabled via build config 00:04:14.559 test-eventdev: explicitly disabled via build config 00:04:14.559 test-fib: explicitly disabled via build config 00:04:14.559 test-flow-perf: explicitly disabled via build config 00:04:14.559 test-gpudev: explicitly disabled 
via build config 00:04:14.559 test-mldev: explicitly disabled via build config 00:04:14.559 test-pipeline: explicitly disabled via build config 00:04:14.559 test-pmd: explicitly disabled via build config 00:04:14.559 test-regex: explicitly disabled via build config 00:04:14.559 test-sad: explicitly disabled via build config 00:04:14.559 test-security-perf: explicitly disabled via build config 00:04:14.559 00:04:14.559 libs: 00:04:14.559 argparse: explicitly disabled via build config 00:04:14.560 metrics: explicitly disabled via build config 00:04:14.560 acl: explicitly disabled via build config 00:04:14.560 bbdev: explicitly disabled via build config 00:04:14.560 bitratestats: explicitly disabled via build config 00:04:14.560 bpf: explicitly disabled via build config 00:04:14.560 cfgfile: explicitly disabled via build config 00:04:14.560 distributor: explicitly disabled via build config 00:04:14.560 efd: explicitly disabled via build config 00:04:14.560 eventdev: explicitly disabled via build config 00:04:14.560 dispatcher: explicitly disabled via build config 00:04:14.560 gpudev: explicitly disabled via build config 00:04:14.560 gro: explicitly disabled via build config 00:04:14.560 gso: explicitly disabled via build config 00:04:14.560 ip_frag: explicitly disabled via build config 00:04:14.560 jobstats: explicitly disabled via build config 00:04:14.560 latencystats: explicitly disabled via build config 00:04:14.560 lpm: explicitly disabled via build config 00:04:14.560 member: explicitly disabled via build config 00:04:14.560 pcapng: explicitly disabled via build config 00:04:14.560 rawdev: explicitly disabled via build config 00:04:14.560 regexdev: explicitly disabled via build config 00:04:14.560 mldev: explicitly disabled via build config 00:04:14.560 rib: explicitly disabled via build config 00:04:14.560 sched: explicitly disabled via build config 00:04:14.560 stack: explicitly disabled via build config 00:04:14.560 ipsec: explicitly disabled via build config 00:04:14.560 pdcp: explicitly disabled via build config 00:04:14.560 fib: explicitly disabled via build config 00:04:14.560 port: explicitly disabled via build config 00:04:14.560 pdump: explicitly disabled via build config 00:04:14.560 table: explicitly disabled via build config 00:04:14.560 pipeline: explicitly disabled via build config 00:04:14.560 graph: explicitly disabled via build config 00:04:14.560 node: explicitly disabled via build config 00:04:14.560 00:04:14.560 drivers: 00:04:14.560 common/cpt: not in enabled drivers build config 00:04:14.560 common/dpaax: not in enabled drivers build config 00:04:14.560 common/iavf: not in enabled drivers build config 00:04:14.560 common/idpf: not in enabled drivers build config 00:04:14.560 common/ionic: not in enabled drivers build config 00:04:14.560 common/mvep: not in enabled drivers build config 00:04:14.560 common/octeontx: not in enabled drivers build config 00:04:14.560 bus/auxiliary: not in enabled drivers build config 00:04:14.560 bus/cdx: not in enabled drivers build config 00:04:14.560 bus/dpaa: not in enabled drivers build config 00:04:14.560 bus/fslmc: not in enabled drivers build config 00:04:14.560 bus/ifpga: not in enabled drivers build config 00:04:14.560 bus/platform: not in enabled drivers build config 00:04:14.560 bus/uacce: not in enabled drivers build config 00:04:14.560 bus/vmbus: not in enabled drivers build config 00:04:14.560 common/cnxk: not in enabled drivers build config 00:04:14.560 common/mlx5: not in enabled drivers build config 00:04:14.560 
common/nfp: not in enabled drivers build config 00:04:14.560 common/nitrox: not in enabled drivers build config 00:04:14.560 common/qat: not in enabled drivers build config 00:04:14.560 common/sfc_efx: not in enabled drivers build config 00:04:14.560 mempool/bucket: not in enabled drivers build config 00:04:14.560 mempool/cnxk: not in enabled drivers build config 00:04:14.560 mempool/dpaa: not in enabled drivers build config 00:04:14.560 mempool/dpaa2: not in enabled drivers build config 00:04:14.560 mempool/octeontx: not in enabled drivers build config 00:04:14.560 mempool/stack: not in enabled drivers build config 00:04:14.560 dma/cnxk: not in enabled drivers build config 00:04:14.560 dma/dpaa: not in enabled drivers build config 00:04:14.560 dma/dpaa2: not in enabled drivers build config 00:04:14.560 dma/hisilicon: not in enabled drivers build config 00:04:14.560 dma/idxd: not in enabled drivers build config 00:04:14.560 dma/ioat: not in enabled drivers build config 00:04:14.560 dma/skeleton: not in enabled drivers build config 00:04:14.560 net/af_packet: not in enabled drivers build config 00:04:14.560 net/af_xdp: not in enabled drivers build config 00:04:14.560 net/ark: not in enabled drivers build config 00:04:14.560 net/atlantic: not in enabled drivers build config 00:04:14.560 net/avp: not in enabled drivers build config 00:04:14.560 net/axgbe: not in enabled drivers build config 00:04:14.560 net/bnx2x: not in enabled drivers build config 00:04:14.560 net/bnxt: not in enabled drivers build config 00:04:14.560 net/bonding: not in enabled drivers build config 00:04:14.560 net/cnxk: not in enabled drivers build config 00:04:14.560 net/cpfl: not in enabled drivers build config 00:04:14.560 net/cxgbe: not in enabled drivers build config 00:04:14.560 net/dpaa: not in enabled drivers build config 00:04:14.560 net/dpaa2: not in enabled drivers build config 00:04:14.560 net/e1000: not in enabled drivers build config 00:04:14.560 net/ena: not in enabled drivers build config 00:04:14.560 net/enetc: not in enabled drivers build config 00:04:14.560 net/enetfec: not in enabled drivers build config 00:04:14.560 net/enic: not in enabled drivers build config 00:04:14.560 net/failsafe: not in enabled drivers build config 00:04:14.560 net/fm10k: not in enabled drivers build config 00:04:14.560 net/gve: not in enabled drivers build config 00:04:14.560 net/hinic: not in enabled drivers build config 00:04:14.560 net/hns3: not in enabled drivers build config 00:04:14.560 net/i40e: not in enabled drivers build config 00:04:14.560 net/iavf: not in enabled drivers build config 00:04:14.560 net/ice: not in enabled drivers build config 00:04:14.560 net/idpf: not in enabled drivers build config 00:04:14.560 net/igc: not in enabled drivers build config 00:04:14.560 net/ionic: not in enabled drivers build config 00:04:14.560 net/ipn3ke: not in enabled drivers build config 00:04:14.560 net/ixgbe: not in enabled drivers build config 00:04:14.560 net/mana: not in enabled drivers build config 00:04:14.560 net/memif: not in enabled drivers build config 00:04:14.560 net/mlx4: not in enabled drivers build config 00:04:14.560 net/mlx5: not in enabled drivers build config 00:04:14.560 net/mvneta: not in enabled drivers build config 00:04:14.560 net/mvpp2: not in enabled drivers build config 00:04:14.560 net/netvsc: not in enabled drivers build config 00:04:14.560 net/nfb: not in enabled drivers build config 00:04:14.560 net/nfp: not in enabled drivers build config 00:04:14.560 net/ngbe: not in enabled drivers build 
config 00:04:14.560 net/null: not in enabled drivers build config 00:04:14.560 net/octeontx: not in enabled drivers build config 00:04:14.560 net/octeon_ep: not in enabled drivers build config 00:04:14.560 net/pcap: not in enabled drivers build config 00:04:14.560 net/pfe: not in enabled drivers build config 00:04:14.560 net/qede: not in enabled drivers build config 00:04:14.560 net/ring: not in enabled drivers build config 00:04:14.560 net/sfc: not in enabled drivers build config 00:04:14.560 net/softnic: not in enabled drivers build config 00:04:14.560 net/tap: not in enabled drivers build config 00:04:14.560 net/thunderx: not in enabled drivers build config 00:04:14.560 net/txgbe: not in enabled drivers build config 00:04:14.560 net/vdev_netvsc: not in enabled drivers build config 00:04:14.560 net/vhost: not in enabled drivers build config 00:04:14.560 net/virtio: not in enabled drivers build config 00:04:14.560 net/vmxnet3: not in enabled drivers build config 00:04:14.560 raw/*: missing internal dependency, "rawdev" 00:04:14.560 crypto/armv8: not in enabled drivers build config 00:04:14.560 crypto/bcmfs: not in enabled drivers build config 00:04:14.560 crypto/caam_jr: not in enabled drivers build config 00:04:14.560 crypto/ccp: not in enabled drivers build config 00:04:14.560 crypto/cnxk: not in enabled drivers build config 00:04:14.560 crypto/dpaa_sec: not in enabled drivers build config 00:04:14.560 crypto/dpaa2_sec: not in enabled drivers build config 00:04:14.560 crypto/ipsec_mb: not in enabled drivers build config 00:04:14.560 crypto/mlx5: not in enabled drivers build config 00:04:14.560 crypto/mvsam: not in enabled drivers build config 00:04:14.560 crypto/nitrox: not in enabled drivers build config 00:04:14.560 crypto/null: not in enabled drivers build config 00:04:14.560 crypto/octeontx: not in enabled drivers build config 00:04:14.560 crypto/openssl: not in enabled drivers build config 00:04:14.560 crypto/scheduler: not in enabled drivers build config 00:04:14.560 crypto/uadk: not in enabled drivers build config 00:04:14.560 crypto/virtio: not in enabled drivers build config 00:04:14.560 compress/isal: not in enabled drivers build config 00:04:14.560 compress/mlx5: not in enabled drivers build config 00:04:14.560 compress/nitrox: not in enabled drivers build config 00:04:14.560 compress/octeontx: not in enabled drivers build config 00:04:14.560 compress/zlib: not in enabled drivers build config 00:04:14.560 regex/*: missing internal dependency, "regexdev" 00:04:14.560 ml/*: missing internal dependency, "mldev" 00:04:14.560 vdpa/ifc: not in enabled drivers build config 00:04:14.560 vdpa/mlx5: not in enabled drivers build config 00:04:14.560 vdpa/nfp: not in enabled drivers build config 00:04:14.560 vdpa/sfc: not in enabled drivers build config 00:04:14.560 event/*: missing internal dependency, "eventdev" 00:04:14.560 baseband/*: missing internal dependency, "bbdev" 00:04:14.560 gpu/*: missing internal dependency, "gpudev" 00:04:14.560 00:04:14.560 00:04:14.560 Build targets in project: 84 00:04:14.560 00:04:14.560 DPDK 24.03.0 00:04:14.560 00:04:14.560 User defined options 00:04:14.560 buildtype : debug 00:04:14.560 default_library : shared 00:04:14.560 libdir : lib 00:04:14.560 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:14.560 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:14.560 c_link_args : 00:04:14.560 cpu_instruction_set: native 00:04:14.560 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:04:14.561 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:04:14.561 enable_docs : false 00:04:14.561 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:14.561 enable_kmods : false 00:04:14.561 max_lcores : 128 00:04:14.561 tests : false 00:04:14.561 00:04:14.561 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:14.561 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:14.561 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:14.561 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:14.561 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:14.561 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:14.561 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:14.561 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:14.561 [7/267] Linking static target lib/librte_kvargs.a 00:04:14.561 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:14.561 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:14.561 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:14.561 [11/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:14.561 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:14.561 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:14.561 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:14.561 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:14.561 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:14.561 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:14.821 [18/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:14.821 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:14.821 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:14.821 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:14.821 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:14.821 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:14.821 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:14.821 [25/267] Linking static target lib/librte_log.a 00:04:14.821 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:14.821 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:14.821 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:14.821 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:14.821 [30/267] Linking static target 
lib/librte_pci.a 00:04:14.821 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:14.821 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:14.821 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:14.821 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:14.821 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:14.821 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:14.821 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:14.821 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:15.080 [39/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:15.080 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:15.080 [41/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:15.080 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.080 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:15.080 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:15.080 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.080 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:15.080 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:15.080 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:15.080 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:15.080 [50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:15.080 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:15.080 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:15.080 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:15.080 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:15.080 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:15.080 [56/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:15.080 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:15.080 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:15.080 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:15.080 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:15.080 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:15.080 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:15.080 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:15.080 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:15.080 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:15.080 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:15.080 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:15.080 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:15.080 [69/267] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:15.080 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:15.080 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:15.080 [72/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:15.080 [73/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:15.080 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:15.080 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:15.080 [76/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:15.080 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:15.080 [78/267] Linking static target lib/librte_meter.a 00:04:15.080 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:15.080 [80/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:15.080 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:15.080 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:15.080 [83/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:15.080 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:15.080 [85/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:04:15.080 [86/267] Linking static target lib/librte_ring.a 00:04:15.080 [87/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:15.080 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:15.080 [89/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:15.080 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:15.080 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:15.080 [92/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:15.080 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:15.080 [94/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:15.080 [95/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:15.340 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:15.340 [97/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:15.340 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:15.340 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:15.340 [100/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:15.340 [101/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:15.340 [102/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:15.340 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:15.340 [104/267] Linking static target lib/librte_telemetry.a 00:04:15.340 [105/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:15.340 [106/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:15.340 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:15.340 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:15.340 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:15.340 [110/267] 
Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:15.340 [111/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:15.340 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:15.340 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:15.340 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:15.340 [115/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:15.340 [116/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:15.340 [117/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:15.340 [118/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:15.340 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:15.340 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:15.340 [121/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:15.340 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:15.340 [123/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:15.340 [124/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:15.340 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:15.340 [126/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:15.340 [127/267] Linking static target lib/librte_cmdline.a 00:04:15.340 [128/267] Linking static target lib/librte_timer.a 00:04:15.340 [129/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:15.340 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:15.340 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:15.340 [132/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:15.340 [133/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:15.340 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:15.340 [135/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:15.340 [136/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:15.340 [137/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:15.340 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:15.340 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:15.340 [140/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:15.340 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:15.340 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:15.340 [143/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:15.340 [144/267] Linking static target lib/librte_net.a 00:04:15.340 [145/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.340 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:15.340 [147/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:15.340 [148/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:15.340 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:15.340 [150/267] 
Linking static target lib/librte_power.a 00:04:15.340 [151/267] Linking static target lib/librte_dmadev.a 00:04:15.340 [152/267] Linking static target lib/librte_compressdev.a 00:04:15.340 [153/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:15.340 [154/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:15.340 [155/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:15.340 [156/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:15.340 [157/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:15.340 [158/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:15.340 [159/267] Linking target lib/librte_log.so.24.1 00:04:15.340 [160/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:15.340 [161/267] Linking static target lib/librte_rcu.a 00:04:15.340 [162/267] Linking static target lib/librte_mempool.a 00:04:15.340 [163/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:15.340 [164/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:15.340 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:15.340 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:15.340 [167/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:15.340 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:15.340 [169/267] Linking static target lib/librte_eal.a 00:04:15.340 [170/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:15.340 [171/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:15.340 [172/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:15.340 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:15.340 [174/267] Linking static target lib/librte_security.a 00:04:15.340 [175/267] Linking static target lib/librte_reorder.a 00:04:15.340 [176/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:15.340 [177/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.340 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:15.340 [179/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:15.601 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:15.601 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:15.601 [182/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.601 [183/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:15.601 [184/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:15.601 [185/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:15.601 [186/267] Linking static target lib/librte_hash.a 00:04:15.601 [187/267] Linking target lib/librte_kvargs.so.24.1 00:04:15.601 [188/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:15.601 [189/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:15.601 [190/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:15.601 [191/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:15.601 [192/267] Linking static target lib/librte_mbuf.a 00:04:15.602 [193/267] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:15.602 [194/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:15.602 [195/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:15.602 [196/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:15.602 [197/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:15.602 [198/267] Linking static target drivers/librte_bus_vdev.a 00:04:15.602 [199/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:15.602 [200/267] Linking static target drivers/librte_bus_pci.a 00:04:15.602 [201/267] Linking static target drivers/librte_mempool_ring.a 00:04:15.602 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:15.602 [203/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:15.602 [204/267] Linking static target lib/librte_cryptodev.a 00:04:15.602 [205/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:15.602 [206/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.862 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:15.862 [208/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.862 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.862 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.862 [211/267] Linking target lib/librte_telemetry.so.24.1 00:04:15.862 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.124 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.124 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:16.124 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.124 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.124 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.124 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:16.124 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:16.386 [220/267] Linking static target lib/librte_ethdev.a 00:04:16.386 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.386 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.386 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.646 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.646 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.646 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.219 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:17.219 [228/267] Linking static target 
lib/librte_vhost.a 00:04:17.791 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.703 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.282 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.855 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.116 [233/267] Linking target lib/librte_eal.so.24.1 00:04:27.116 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:27.116 [235/267] Linking target lib/librte_pci.so.24.1 00:04:27.116 [236/267] Linking target lib/librte_ring.so.24.1 00:04:27.116 [237/267] Linking target lib/librte_meter.so.24.1 00:04:27.116 [238/267] Linking target lib/librte_timer.so.24.1 00:04:27.116 [239/267] Linking target lib/librte_dmadev.so.24.1 00:04:27.116 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:04:27.377 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:27.377 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:27.377 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:27.377 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:27.377 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:27.377 [246/267] Linking target lib/librte_rcu.so.24.1 00:04:27.377 [247/267] Linking target lib/librte_mempool.so.24.1 00:04:27.377 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:04:27.377 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:27.377 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:27.637 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:04:27.637 [252/267] Linking target lib/librte_mbuf.so.24.1 00:04:27.637 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:27.637 [254/267] Linking target lib/librte_compressdev.so.24.1 00:04:27.637 [255/267] Linking target lib/librte_net.so.24.1 00:04:27.637 [256/267] Linking target lib/librte_reorder.so.24.1 00:04:27.637 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:04:27.898 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:27.898 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:27.898 [260/267] Linking target lib/librte_hash.so.24.1 00:04:27.898 [261/267] Linking target lib/librte_cmdline.so.24.1 00:04:27.898 [262/267] Linking target lib/librte_security.so.24.1 00:04:27.898 [263/267] Linking target lib/librte_ethdev.so.24.1 00:04:27.898 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:27.898 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:28.159 [266/267] Linking target lib/librte_power.so.24.1 00:04:28.159 [267/267] Linking target lib/librte_vhost.so.24.1 00:04:28.159 INFO: autodetecting backend as ninja 00:04:28.159 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:04:31.456 CC lib/ut_mock/mock.o 00:04:31.456 CC lib/ut/ut.o 00:04:31.456 CC lib/log/log.o 00:04:31.456 CC lib/log/log_flags.o 00:04:31.456 CC 
lib/log/log_deprecated.o 00:04:31.718 LIB libspdk_ut.a 00:04:31.718 LIB libspdk_ut_mock.a 00:04:31.718 LIB libspdk_log.a 00:04:31.718 SO libspdk_ut.so.2.0 00:04:31.718 SO libspdk_log.so.7.1 00:04:31.718 SO libspdk_ut_mock.so.6.0 00:04:31.718 SYMLINK libspdk_ut.so 00:04:31.718 SYMLINK libspdk_ut_mock.so 00:04:31.718 SYMLINK libspdk_log.so 00:04:32.383 CXX lib/trace_parser/trace.o 00:04:32.383 CC lib/util/base64.o 00:04:32.383 CC lib/dma/dma.o 00:04:32.383 CC lib/ioat/ioat.o 00:04:32.383 CC lib/util/bit_array.o 00:04:32.383 CC lib/util/cpuset.o 00:04:32.383 CC lib/util/crc16.o 00:04:32.383 CC lib/util/crc32.o 00:04:32.383 CC lib/util/crc32c.o 00:04:32.383 CC lib/util/crc32_ieee.o 00:04:32.383 CC lib/util/crc64.o 00:04:32.383 CC lib/util/dif.o 00:04:32.383 CC lib/util/fd.o 00:04:32.383 CC lib/util/fd_group.o 00:04:32.383 CC lib/util/file.o 00:04:32.383 CC lib/util/hexlify.o 00:04:32.383 CC lib/util/iov.o 00:04:32.383 CC lib/util/math.o 00:04:32.383 CC lib/util/net.o 00:04:32.383 CC lib/util/pipe.o 00:04:32.383 CC lib/util/strerror_tls.o 00:04:32.383 CC lib/util/string.o 00:04:32.383 CC lib/util/uuid.o 00:04:32.383 CC lib/util/xor.o 00:04:32.383 CC lib/util/zipf.o 00:04:32.383 CC lib/util/md5.o 00:04:32.383 CC lib/vfio_user/host/vfio_user_pci.o 00:04:32.383 CC lib/vfio_user/host/vfio_user.o 00:04:32.383 LIB libspdk_dma.a 00:04:32.707 SO libspdk_dma.so.5.0 00:04:32.707 LIB libspdk_ioat.a 00:04:32.707 SYMLINK libspdk_dma.so 00:04:32.707 SO libspdk_ioat.so.7.0 00:04:32.707 LIB libspdk_vfio_user.a 00:04:32.707 SYMLINK libspdk_ioat.so 00:04:32.707 SO libspdk_vfio_user.so.5.0 00:04:32.707 SYMLINK libspdk_vfio_user.so 00:04:32.707 LIB libspdk_util.a 00:04:32.970 SO libspdk_util.so.10.1 00:04:32.970 SYMLINK libspdk_util.so 00:04:32.970 LIB libspdk_trace_parser.a 00:04:33.232 SO libspdk_trace_parser.so.6.0 00:04:33.232 SYMLINK libspdk_trace_parser.so 00:04:33.232 CC lib/json/json_parse.o 00:04:33.232 CC lib/json/json_util.o 00:04:33.232 CC lib/json/json_write.o 00:04:33.232 CC lib/idxd/idxd.o 00:04:33.232 CC lib/conf/conf.o 00:04:33.232 CC lib/env_dpdk/env.o 00:04:33.494 CC lib/idxd/idxd_user.o 00:04:33.494 CC lib/vmd/vmd.o 00:04:33.494 CC lib/env_dpdk/memory.o 00:04:33.494 CC lib/idxd/idxd_kernel.o 00:04:33.494 CC lib/rdma_utils/rdma_utils.o 00:04:33.494 CC lib/vmd/led.o 00:04:33.494 CC lib/env_dpdk/pci.o 00:04:33.494 CC lib/env_dpdk/init.o 00:04:33.494 CC lib/env_dpdk/threads.o 00:04:33.494 CC lib/env_dpdk/pci_ioat.o 00:04:33.494 CC lib/env_dpdk/pci_virtio.o 00:04:33.494 CC lib/env_dpdk/pci_vmd.o 00:04:33.494 CC lib/env_dpdk/pci_idxd.o 00:04:33.494 CC lib/env_dpdk/pci_event.o 00:04:33.494 CC lib/env_dpdk/sigbus_handler.o 00:04:33.494 CC lib/env_dpdk/pci_dpdk.o 00:04:33.494 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:33.494 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:33.755 LIB libspdk_conf.a 00:04:33.755 LIB libspdk_json.a 00:04:33.755 SO libspdk_conf.so.6.0 00:04:33.755 LIB libspdk_rdma_utils.a 00:04:33.755 SO libspdk_json.so.6.0 00:04:33.755 SO libspdk_rdma_utils.so.1.0 00:04:33.755 SYMLINK libspdk_conf.so 00:04:33.755 SYMLINK libspdk_rdma_utils.so 00:04:33.755 SYMLINK libspdk_json.so 00:04:34.016 LIB libspdk_idxd.a 00:04:34.016 SO libspdk_idxd.so.12.1 00:04:34.016 LIB libspdk_vmd.a 00:04:34.016 SO libspdk_vmd.so.6.0 00:04:34.016 SYMLINK libspdk_idxd.so 00:04:34.016 SYMLINK libspdk_vmd.so 00:04:34.016 CC lib/jsonrpc/jsonrpc_server.o 00:04:34.016 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:34.016 CC lib/jsonrpc/jsonrpc_client.o 00:04:34.016 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:34.276 CC 
lib/rdma_provider/common.o 00:04:34.276 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:34.276 LIB libspdk_rdma_provider.a 00:04:34.276 SO libspdk_rdma_provider.so.7.0 00:04:34.537 LIB libspdk_jsonrpc.a 00:04:34.537 SO libspdk_jsonrpc.so.6.0 00:04:34.537 SYMLINK libspdk_rdma_provider.so 00:04:34.537 SYMLINK libspdk_jsonrpc.so 00:04:34.537 LIB libspdk_env_dpdk.a 00:04:34.796 SO libspdk_env_dpdk.so.15.1 00:04:34.796 SYMLINK libspdk_env_dpdk.so 00:04:34.796 CC lib/rpc/rpc.o 00:04:35.057 LIB libspdk_rpc.a 00:04:35.057 SO libspdk_rpc.so.6.0 00:04:35.318 SYMLINK libspdk_rpc.so 00:04:35.578 CC lib/keyring/keyring.o 00:04:35.578 CC lib/trace/trace.o 00:04:35.578 CC lib/keyring/keyring_rpc.o 00:04:35.578 CC lib/trace/trace_flags.o 00:04:35.578 CC lib/trace/trace_rpc.o 00:04:35.578 CC lib/notify/notify.o 00:04:35.578 CC lib/notify/notify_rpc.o 00:04:35.838 LIB libspdk_notify.a 00:04:35.838 SO libspdk_notify.so.6.0 00:04:35.839 LIB libspdk_keyring.a 00:04:35.839 LIB libspdk_trace.a 00:04:35.839 SO libspdk_keyring.so.2.0 00:04:35.839 SO libspdk_trace.so.11.0 00:04:35.839 SYMLINK libspdk_notify.so 00:04:35.839 SYMLINK libspdk_keyring.so 00:04:35.839 SYMLINK libspdk_trace.so 00:04:36.416 CC lib/thread/thread.o 00:04:36.416 CC lib/thread/iobuf.o 00:04:36.416 CC lib/sock/sock.o 00:04:36.416 CC lib/sock/sock_rpc.o 00:04:36.676 LIB libspdk_sock.a 00:04:36.676 SO libspdk_sock.so.10.0 00:04:36.676 SYMLINK libspdk_sock.so 00:04:37.248 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:37.248 CC lib/nvme/nvme_ctrlr.o 00:04:37.248 CC lib/nvme/nvme_fabric.o 00:04:37.248 CC lib/nvme/nvme_ns_cmd.o 00:04:37.248 CC lib/nvme/nvme_ns.o 00:04:37.248 CC lib/nvme/nvme_pcie_common.o 00:04:37.248 CC lib/nvme/nvme_pcie.o 00:04:37.248 CC lib/nvme/nvme_qpair.o 00:04:37.248 CC lib/nvme/nvme.o 00:04:37.248 CC lib/nvme/nvme_quirks.o 00:04:37.248 CC lib/nvme/nvme_transport.o 00:04:37.248 CC lib/nvme/nvme_discovery.o 00:04:37.248 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:37.248 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:37.248 CC lib/nvme/nvme_tcp.o 00:04:37.248 CC lib/nvme/nvme_opal.o 00:04:37.248 CC lib/nvme/nvme_io_msg.o 00:04:37.248 CC lib/nvme/nvme_poll_group.o 00:04:37.248 CC lib/nvme/nvme_zns.o 00:04:37.248 CC lib/nvme/nvme_stubs.o 00:04:37.248 CC lib/nvme/nvme_auth.o 00:04:37.248 CC lib/nvme/nvme_cuse.o 00:04:37.248 CC lib/nvme/nvme_vfio_user.o 00:04:37.248 CC lib/nvme/nvme_rdma.o 00:04:37.820 LIB libspdk_thread.a 00:04:37.820 SO libspdk_thread.so.11.0 00:04:37.820 SYMLINK libspdk_thread.so 00:04:38.082 CC lib/accel/accel.o 00:04:38.082 CC lib/accel/accel_rpc.o 00:04:38.082 CC lib/accel/accel_sw.o 00:04:38.082 CC lib/vfu_tgt/tgt_endpoint.o 00:04:38.082 CC lib/init/json_config.o 00:04:38.082 CC lib/blob/blobstore.o 00:04:38.082 CC lib/fsdev/fsdev.o 00:04:38.082 CC lib/vfu_tgt/tgt_rpc.o 00:04:38.082 CC lib/init/subsystem.o 00:04:38.082 CC lib/init/subsystem_rpc.o 00:04:38.082 CC lib/fsdev/fsdev_io.o 00:04:38.082 CC lib/blob/request.o 00:04:38.082 CC lib/init/rpc.o 00:04:38.082 CC lib/fsdev/fsdev_rpc.o 00:04:38.082 CC lib/blob/zeroes.o 00:04:38.082 CC lib/blob/blob_bs_dev.o 00:04:38.082 CC lib/virtio/virtio.o 00:04:38.082 CC lib/virtio/virtio_vhost_user.o 00:04:38.082 CC lib/virtio/virtio_vfio_user.o 00:04:38.082 CC lib/virtio/virtio_pci.o 00:04:38.343 LIB libspdk_init.a 00:04:38.343 SO libspdk_init.so.6.0 00:04:38.605 LIB libspdk_vfu_tgt.a 00:04:38.605 LIB libspdk_virtio.a 00:04:38.605 SYMLINK libspdk_init.so 00:04:38.605 SO libspdk_vfu_tgt.so.3.0 00:04:38.605 SO libspdk_virtio.so.7.0 00:04:38.605 SYMLINK libspdk_vfu_tgt.so 00:04:38.605 SYMLINK 
libspdk_virtio.so 00:04:38.867 LIB libspdk_fsdev.a 00:04:38.867 SO libspdk_fsdev.so.2.0 00:04:38.867 CC lib/event/app.o 00:04:38.867 CC lib/event/reactor.o 00:04:38.867 CC lib/event/log_rpc.o 00:04:38.867 CC lib/event/app_rpc.o 00:04:38.867 CC lib/event/scheduler_static.o 00:04:38.867 SYMLINK libspdk_fsdev.so 00:04:39.131 LIB libspdk_accel.a 00:04:39.131 SO libspdk_accel.so.16.0 00:04:39.131 LIB libspdk_nvme.a 00:04:39.131 SYMLINK libspdk_accel.so 00:04:39.131 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:39.392 LIB libspdk_event.a 00:04:39.392 SO libspdk_nvme.so.15.0 00:04:39.392 SO libspdk_event.so.14.0 00:04:39.392 SYMLINK libspdk_event.so 00:04:39.654 CC lib/bdev/bdev.o 00:04:39.654 CC lib/bdev/bdev_rpc.o 00:04:39.654 CC lib/bdev/bdev_zone.o 00:04:39.654 CC lib/bdev/part.o 00:04:39.654 CC lib/bdev/scsi_nvme.o 00:04:39.654 SYMLINK libspdk_nvme.so 00:04:39.915 LIB libspdk_fuse_dispatcher.a 00:04:39.915 SO libspdk_fuse_dispatcher.so.1.0 00:04:39.915 SYMLINK libspdk_fuse_dispatcher.so 00:04:40.855 LIB libspdk_blob.a 00:04:40.855 SO libspdk_blob.so.12.0 00:04:40.855 SYMLINK libspdk_blob.so 00:04:41.425 CC lib/blobfs/blobfs.o 00:04:41.425 CC lib/blobfs/tree.o 00:04:41.425 CC lib/lvol/lvol.o 00:04:41.996 LIB libspdk_bdev.a 00:04:41.996 SO libspdk_bdev.so.17.0 00:04:41.996 LIB libspdk_blobfs.a 00:04:41.996 SO libspdk_blobfs.so.11.0 00:04:41.996 SYMLINK libspdk_bdev.so 00:04:41.996 LIB libspdk_lvol.a 00:04:42.257 SYMLINK libspdk_blobfs.so 00:04:42.257 SO libspdk_lvol.so.11.0 00:04:42.257 SYMLINK libspdk_lvol.so 00:04:42.519 CC lib/scsi/dev.o 00:04:42.519 CC lib/scsi/lun.o 00:04:42.519 CC lib/nvmf/ctrlr.o 00:04:42.519 CC lib/scsi/port.o 00:04:42.519 CC lib/nvmf/ctrlr_discovery.o 00:04:42.519 CC lib/scsi/scsi.o 00:04:42.519 CC lib/nvmf/ctrlr_bdev.o 00:04:42.519 CC lib/scsi/scsi_bdev.o 00:04:42.519 CC lib/scsi/scsi_pr.o 00:04:42.519 CC lib/nvmf/subsystem.o 00:04:42.519 CC lib/nvmf/nvmf.o 00:04:42.519 CC lib/scsi/scsi_rpc.o 00:04:42.519 CC lib/nvmf/nvmf_rpc.o 00:04:42.519 CC lib/scsi/task.o 00:04:42.519 CC lib/nvmf/transport.o 00:04:42.519 CC lib/nvmf/tcp.o 00:04:42.519 CC lib/nvmf/stubs.o 00:04:42.519 CC lib/nbd/nbd.o 00:04:42.519 CC lib/nvmf/mdns_server.o 00:04:42.519 CC lib/nbd/nbd_rpc.o 00:04:42.519 CC lib/ublk/ublk.o 00:04:42.519 CC lib/nvmf/vfio_user.o 00:04:42.519 CC lib/nvmf/rdma.o 00:04:42.519 CC lib/ftl/ftl_core.o 00:04:42.519 CC lib/ublk/ublk_rpc.o 00:04:42.519 CC lib/nvmf/auth.o 00:04:42.519 CC lib/ftl/ftl_init.o 00:04:42.519 CC lib/ftl/ftl_layout.o 00:04:42.519 CC lib/ftl/ftl_debug.o 00:04:42.519 CC lib/ftl/ftl_io.o 00:04:42.519 CC lib/ftl/ftl_sb.o 00:04:42.519 CC lib/ftl/ftl_l2p.o 00:04:42.519 CC lib/ftl/ftl_l2p_flat.o 00:04:42.519 CC lib/ftl/ftl_nv_cache.o 00:04:42.519 CC lib/ftl/ftl_band.o 00:04:42.519 CC lib/ftl/ftl_band_ops.o 00:04:42.519 CC lib/ftl/ftl_writer.o 00:04:42.519 CC lib/ftl/ftl_rq.o 00:04:42.519 CC lib/ftl/ftl_reloc.o 00:04:42.519 CC lib/ftl/ftl_l2p_cache.o 00:04:42.519 CC lib/ftl/ftl_p2l.o 00:04:42.519 CC lib/ftl/ftl_p2l_log.o 00:04:42.519 CC lib/ftl/mngt/ftl_mngt.o 00:04:42.519 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:42.519 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:42.519 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:42.519 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:42.519 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:42.519 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:42.520 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:42.520 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:42.520 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:42.520 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:42.520 CC lib/ftl/mngt/ftl_mngt_recovery.o 
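The stage above configured and compiled the bundled DPDK before SPDK's own objects (the CC lib/... lines that follow) started: the disable_libs / enable_drivers / max_lcores summary near the top of this section is DPDK's meson option dump, and the build itself ran as "ninja -C .../spdk/dpdk/build-tmp -j 144". A minimal sketch of an equivalent standalone invocation, assuming stock DPDK meson option names and a hypothetical checkout; the option lists are abbreviated from the full values logged above, and this is not the exact command SPDK's configure script emits:

    # Hypothetical manual reproduction of the DPDK configure+build stage.
    # enable_docs/enable_kmods/tests/max_lcores/disable_libs/enable_drivers
    # are standard DPDK meson options.
    cd dpdk
    meson setup build-tmp \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmax_lcores=128 \
        -Ddisable_libs=port,lpm,ipsec \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
    ninja -C build-tmp -j 144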
00:04:42.520 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:42.520 CC lib/ftl/utils/ftl_conf.o 00:04:42.520 CC lib/ftl/utils/ftl_md.o 00:04:42.520 CC lib/ftl/utils/ftl_mempool.o 00:04:42.520 CC lib/ftl/utils/ftl_bitmap.o 00:04:42.520 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:42.520 CC lib/ftl/utils/ftl_property.o 00:04:42.520 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:42.520 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:42.520 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:42.520 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:42.520 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:42.520 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:42.520 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:42.520 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:42.520 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:42.520 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:42.520 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:42.520 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:42.520 CC lib/ftl/base/ftl_base_bdev.o 00:04:42.520 CC lib/ftl/ftl_trace.o 00:04:42.520 CC lib/ftl/base/ftl_base_dev.o 00:04:43.461 LIB libspdk_nbd.a 00:04:43.461 SO libspdk_nbd.so.7.0 00:04:43.461 LIB libspdk_scsi.a 00:04:43.461 SYMLINK libspdk_nbd.so 00:04:43.461 SO libspdk_scsi.so.9.0 00:04:43.461 LIB libspdk_ublk.a 00:04:43.461 SYMLINK libspdk_scsi.so 00:04:43.461 SO libspdk_ublk.so.3.0 00:04:43.723 SYMLINK libspdk_ublk.so 00:04:43.723 LIB libspdk_ftl.a 00:04:43.984 CC lib/vhost/vhost.o 00:04:43.984 CC lib/vhost/vhost_scsi.o 00:04:43.984 CC lib/vhost/vhost_rpc.o 00:04:43.984 CC lib/vhost/vhost_blk.o 00:04:43.984 CC lib/vhost/rte_vhost_user.o 00:04:43.984 CC lib/iscsi/conn.o 00:04:43.984 CC lib/iscsi/init_grp.o 00:04:43.984 CC lib/iscsi/iscsi.o 00:04:43.984 CC lib/iscsi/param.o 00:04:43.984 CC lib/iscsi/portal_grp.o 00:04:43.984 CC lib/iscsi/tgt_node.o 00:04:43.984 CC lib/iscsi/iscsi_subsystem.o 00:04:43.984 CC lib/iscsi/iscsi_rpc.o 00:04:43.984 CC lib/iscsi/task.o 00:04:43.984 SO libspdk_ftl.so.9.0 00:04:44.245 SYMLINK libspdk_ftl.so 00:04:44.817 LIB libspdk_nvmf.a 00:04:44.817 SO libspdk_nvmf.so.20.0 00:04:44.817 LIB libspdk_vhost.a 00:04:44.817 SO libspdk_vhost.so.8.0 00:04:44.817 SYMLINK libspdk_nvmf.so 00:04:45.079 SYMLINK libspdk_vhost.so 00:04:45.079 LIB libspdk_iscsi.a 00:04:45.079 SO libspdk_iscsi.so.8.0 00:04:45.340 SYMLINK libspdk_iscsi.so 00:04:45.911 CC module/env_dpdk/env_dpdk_rpc.o 00:04:45.911 CC module/vfu_device/vfu_virtio.o 00:04:45.911 CC module/vfu_device/vfu_virtio_scsi.o 00:04:45.911 CC module/vfu_device/vfu_virtio_blk.o 00:04:45.911 CC module/vfu_device/vfu_virtio_rpc.o 00:04:45.911 CC module/vfu_device/vfu_virtio_fs.o 00:04:46.172 CC module/blob/bdev/blob_bdev.o 00:04:46.172 LIB libspdk_env_dpdk_rpc.a 00:04:46.172 CC module/fsdev/aio/fsdev_aio.o 00:04:46.172 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:46.172 CC module/fsdev/aio/linux_aio_mgr.o 00:04:46.172 CC module/accel/error/accel_error.o 00:04:46.172 CC module/accel/error/accel_error_rpc.o 00:04:46.172 CC module/accel/dsa/accel_dsa.o 00:04:46.172 CC module/accel/ioat/accel_ioat.o 00:04:46.172 CC module/accel/ioat/accel_ioat_rpc.o 00:04:46.172 CC module/accel/dsa/accel_dsa_rpc.o 00:04:46.172 CC module/scheduler/gscheduler/gscheduler.o 00:04:46.172 CC module/keyring/file/keyring.o 00:04:46.172 CC module/sock/posix/posix.o 00:04:46.172 CC module/keyring/linux/keyring.o 00:04:46.172 CC module/keyring/file/keyring_rpc.o 00:04:46.172 CC module/keyring/linux/keyring_rpc.o 00:04:46.173 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:46.173 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:46.173 CC 
module/accel/iaa/accel_iaa.o 00:04:46.173 CC module/accel/iaa/accel_iaa_rpc.o 00:04:46.173 SO libspdk_env_dpdk_rpc.so.6.0 00:04:46.173 SYMLINK libspdk_env_dpdk_rpc.so 00:04:46.173 LIB libspdk_scheduler_gscheduler.a 00:04:46.173 LIB libspdk_keyring_linux.a 00:04:46.173 LIB libspdk_keyring_file.a 00:04:46.173 LIB libspdk_scheduler_dpdk_governor.a 00:04:46.433 SO libspdk_keyring_linux.so.1.0 00:04:46.433 SO libspdk_scheduler_gscheduler.so.4.0 00:04:46.433 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:46.433 SO libspdk_keyring_file.so.2.0 00:04:46.433 LIB libspdk_accel_ioat.a 00:04:46.433 LIB libspdk_accel_iaa.a 00:04:46.433 LIB libspdk_accel_error.a 00:04:46.433 LIB libspdk_scheduler_dynamic.a 00:04:46.433 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:46.433 SO libspdk_accel_ioat.so.6.0 00:04:46.433 SO libspdk_accel_iaa.so.3.0 00:04:46.433 SO libspdk_accel_error.so.2.0 00:04:46.433 SO libspdk_scheduler_dynamic.so.4.0 00:04:46.433 SYMLINK libspdk_scheduler_gscheduler.so 00:04:46.433 LIB libspdk_blob_bdev.a 00:04:46.433 SYMLINK libspdk_keyring_linux.so 00:04:46.433 SYMLINK libspdk_keyring_file.so 00:04:46.433 LIB libspdk_accel_dsa.a 00:04:46.433 SO libspdk_blob_bdev.so.12.0 00:04:46.433 SYMLINK libspdk_accel_error.so 00:04:46.433 SYMLINK libspdk_accel_ioat.so 00:04:46.433 SYMLINK libspdk_scheduler_dynamic.so 00:04:46.433 SO libspdk_accel_dsa.so.5.0 00:04:46.433 SYMLINK libspdk_accel_iaa.so 00:04:46.433 LIB libspdk_vfu_device.a 00:04:46.433 SYMLINK libspdk_blob_bdev.so 00:04:46.433 SYMLINK libspdk_accel_dsa.so 00:04:46.433 SO libspdk_vfu_device.so.3.0 00:04:46.702 SYMLINK libspdk_vfu_device.so 00:04:46.702 LIB libspdk_fsdev_aio.a 00:04:46.702 SO libspdk_fsdev_aio.so.1.0 00:04:46.702 LIB libspdk_sock_posix.a 00:04:46.964 SO libspdk_sock_posix.so.6.0 00:04:46.964 SYMLINK libspdk_fsdev_aio.so 00:04:46.964 SYMLINK libspdk_sock_posix.so 00:04:46.964 CC module/bdev/lvol/vbdev_lvol.o 00:04:46.964 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:46.964 CC module/bdev/delay/vbdev_delay.o 00:04:46.964 CC module/bdev/error/vbdev_error.o 00:04:46.964 CC module/bdev/gpt/gpt.o 00:04:46.964 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:46.964 CC module/bdev/error/vbdev_error_rpc.o 00:04:46.964 CC module/bdev/gpt/vbdev_gpt.o 00:04:46.964 CC module/bdev/aio/bdev_aio.o 00:04:46.964 CC module/blobfs/bdev/blobfs_bdev.o 00:04:46.964 CC module/bdev/nvme/bdev_nvme.o 00:04:46.964 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:46.964 CC module/bdev/aio/bdev_aio_rpc.o 00:04:46.964 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:46.964 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:46.964 CC module/bdev/raid/bdev_raid.o 00:04:46.964 CC module/bdev/nvme/nvme_rpc.o 00:04:46.964 CC module/bdev/iscsi/bdev_iscsi.o 00:04:46.964 CC module/bdev/nvme/bdev_mdns_client.o 00:04:46.964 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:46.964 CC module/bdev/raid/bdev_raid_rpc.o 00:04:46.964 CC module/bdev/raid/bdev_raid_sb.o 00:04:46.964 CC module/bdev/passthru/vbdev_passthru.o 00:04:46.964 CC module/bdev/nvme/vbdev_opal.o 00:04:46.964 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:46.964 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:46.964 CC module/bdev/null/bdev_null.o 00:04:46.964 CC module/bdev/raid/raid0.o 00:04:46.964 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:46.965 CC module/bdev/raid/raid1.o 00:04:46.965 CC module/bdev/null/bdev_null_rpc.o 00:04:46.965 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:46.965 CC module/bdev/malloc/bdev_malloc.o 00:04:46.965 CC module/bdev/raid/concat.o 00:04:46.965 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:04:46.965 CC module/bdev/split/vbdev_split.o 00:04:46.965 CC module/bdev/ftl/bdev_ftl.o 00:04:46.965 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:46.965 CC module/bdev/split/vbdev_split_rpc.o 00:04:47.224 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:47.224 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:47.224 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:47.484 LIB libspdk_blobfs_bdev.a 00:04:47.484 SO libspdk_blobfs_bdev.so.6.0 00:04:47.484 LIB libspdk_bdev_error.a 00:04:47.484 LIB libspdk_bdev_gpt.a 00:04:47.484 LIB libspdk_bdev_split.a 00:04:47.484 SO libspdk_bdev_gpt.so.6.0 00:04:47.484 SO libspdk_bdev_error.so.6.0 00:04:47.484 LIB libspdk_bdev_null.a 00:04:47.484 SYMLINK libspdk_blobfs_bdev.so 00:04:47.484 LIB libspdk_bdev_passthru.a 00:04:47.484 SO libspdk_bdev_split.so.6.0 00:04:47.484 LIB libspdk_bdev_ftl.a 00:04:47.484 LIB libspdk_bdev_zone_block.a 00:04:47.484 LIB libspdk_bdev_delay.a 00:04:47.484 SO libspdk_bdev_null.so.6.0 00:04:47.484 SYMLINK libspdk_bdev_gpt.so 00:04:47.484 SO libspdk_bdev_passthru.so.6.0 00:04:47.484 SO libspdk_bdev_delay.so.6.0 00:04:47.484 LIB libspdk_bdev_aio.a 00:04:47.484 SO libspdk_bdev_zone_block.so.6.0 00:04:47.484 SYMLINK libspdk_bdev_error.so 00:04:47.484 SO libspdk_bdev_ftl.so.6.0 00:04:47.484 SYMLINK libspdk_bdev_split.so 00:04:47.484 LIB libspdk_bdev_iscsi.a 00:04:47.484 LIB libspdk_bdev_malloc.a 00:04:47.484 SO libspdk_bdev_aio.so.6.0 00:04:47.745 SYMLINK libspdk_bdev_null.so 00:04:47.745 SO libspdk_bdev_malloc.so.6.0 00:04:47.745 SYMLINK libspdk_bdev_passthru.so 00:04:47.745 SO libspdk_bdev_iscsi.so.6.0 00:04:47.745 SYMLINK libspdk_bdev_delay.so 00:04:47.745 SYMLINK libspdk_bdev_ftl.so 00:04:47.745 SYMLINK libspdk_bdev_zone_block.so 00:04:47.745 LIB libspdk_bdev_lvol.a 00:04:47.745 SYMLINK libspdk_bdev_aio.so 00:04:47.745 SO libspdk_bdev_lvol.so.6.0 00:04:47.745 SYMLINK libspdk_bdev_malloc.so 00:04:47.745 SYMLINK libspdk_bdev_iscsi.so 00:04:47.745 LIB libspdk_bdev_virtio.a 00:04:47.745 SYMLINK libspdk_bdev_lvol.so 00:04:47.745 SO libspdk_bdev_virtio.so.6.0 00:04:47.745 SYMLINK libspdk_bdev_virtio.so 00:04:48.008 LIB libspdk_bdev_raid.a 00:04:48.270 SO libspdk_bdev_raid.so.6.0 00:04:48.270 SYMLINK libspdk_bdev_raid.so 00:04:49.673 LIB libspdk_bdev_nvme.a 00:04:49.673 SO libspdk_bdev_nvme.so.7.1 00:04:49.673 SYMLINK libspdk_bdev_nvme.so 00:04:50.246 CC module/event/subsystems/vmd/vmd.o 00:04:50.246 CC module/event/subsystems/iobuf/iobuf.o 00:04:50.246 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:50.246 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:50.246 CC module/event/subsystems/sock/sock.o 00:04:50.246 CC module/event/subsystems/keyring/keyring.o 00:04:50.246 CC module/event/subsystems/fsdev/fsdev.o 00:04:50.246 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:50.246 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:50.246 CC module/event/subsystems/scheduler/scheduler.o 00:04:50.509 LIB libspdk_event_keyring.a 00:04:50.509 LIB libspdk_event_vfu_tgt.a 00:04:50.509 LIB libspdk_event_vmd.a 00:04:50.509 LIB libspdk_event_fsdev.a 00:04:50.509 LIB libspdk_event_sock.a 00:04:50.509 LIB libspdk_event_vhost_blk.a 00:04:50.509 LIB libspdk_event_scheduler.a 00:04:50.509 LIB libspdk_event_iobuf.a 00:04:50.509 SO libspdk_event_keyring.so.1.0 00:04:50.509 SO libspdk_event_vfu_tgt.so.3.0 00:04:50.509 SO libspdk_event_vmd.so.6.0 00:04:50.509 SO libspdk_event_fsdev.so.1.0 00:04:50.509 SO libspdk_event_vhost_blk.so.3.0 00:04:50.509 SO libspdk_event_sock.so.5.0 00:04:50.509 SO libspdk_event_iobuf.so.3.0 
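The SO libspdk_*.so.X.Y / SYMLINK libspdk_*.so pairs in this stretch follow the usual versioned shared-library layout: the real file carries the version suffix, its soname pins the ABI major version, and an unversioned symlink is what the linker resolves at build time. A generic sketch of that convention with hypothetical names (it illustrates the pattern, not SPDK's actual make rules):

    # Build a versioned shared object whose soname pins the ABI major version.
    gcc -shared -fPIC -Wl,-soname,libdemo.so.2 -o libdemo.so.2.0 demo.o
    # Runtime link matching the soname (ldconfig would also create this).
    ln -sf libdemo.so.2.0 libdemo.so.2
    # Unversioned development link used when linking with -ldemo.
    ln -sf libdemo.so.2.0 libdemo.so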
00:04:50.509 SO libspdk_event_scheduler.so.4.0 00:04:50.509 SYMLINK libspdk_event_keyring.so 00:04:50.772 SYMLINK libspdk_event_vfu_tgt.so 00:04:50.772 SYMLINK libspdk_event_vmd.so 00:04:50.772 SYMLINK libspdk_event_fsdev.so 00:04:50.772 SYMLINK libspdk_event_sock.so 00:04:50.772 SYMLINK libspdk_event_vhost_blk.so 00:04:50.772 SYMLINK libspdk_event_scheduler.so 00:04:50.772 SYMLINK libspdk_event_iobuf.so 00:04:51.033 CC module/event/subsystems/accel/accel.o 00:04:51.294 LIB libspdk_event_accel.a 00:04:51.294 SO libspdk_event_accel.so.6.0 00:04:51.294 SYMLINK libspdk_event_accel.so 00:04:51.555 CC module/event/subsystems/bdev/bdev.o 00:04:51.816 LIB libspdk_event_bdev.a 00:04:51.816 SO libspdk_event_bdev.so.6.0 00:04:51.816 SYMLINK libspdk_event_bdev.so 00:04:52.388 CC module/event/subsystems/scsi/scsi.o 00:04:52.388 CC module/event/subsystems/nbd/nbd.o 00:04:52.388 CC module/event/subsystems/ublk/ublk.o 00:04:52.388 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:52.388 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:52.388 LIB libspdk_event_ublk.a 00:04:52.388 LIB libspdk_event_nbd.a 00:04:52.388 LIB libspdk_event_scsi.a 00:04:52.388 SO libspdk_event_ublk.so.3.0 00:04:52.388 SO libspdk_event_nbd.so.6.0 00:04:52.650 SO libspdk_event_scsi.so.6.0 00:04:52.650 LIB libspdk_event_nvmf.a 00:04:52.650 SYMLINK libspdk_event_ublk.so 00:04:52.650 SYMLINK libspdk_event_nbd.so 00:04:52.650 SYMLINK libspdk_event_scsi.so 00:04:52.650 SO libspdk_event_nvmf.so.6.0 00:04:52.650 SYMLINK libspdk_event_nvmf.so 00:04:52.911 CC module/event/subsystems/iscsi/iscsi.o 00:04:52.911 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:53.172 LIB libspdk_event_vhost_scsi.a 00:04:53.172 LIB libspdk_event_iscsi.a 00:04:53.172 SO libspdk_event_vhost_scsi.so.3.0 00:04:53.172 SO libspdk_event_iscsi.so.6.0 00:04:53.172 SYMLINK libspdk_event_vhost_scsi.so 00:04:53.172 SYMLINK libspdk_event_iscsi.so 00:04:53.432 SO libspdk.so.6.0 00:04:53.432 SYMLINK libspdk.so 00:04:54.006 CC app/trace_record/trace_record.o 00:04:54.006 CXX app/trace/trace.o 00:04:54.006 CC app/spdk_nvme_discover/discovery_aer.o 00:04:54.006 CC app/spdk_top/spdk_top.o 00:04:54.006 CC test/rpc_client/rpc_client_test.o 00:04:54.006 CC app/spdk_lspci/spdk_lspci.o 00:04:54.006 CC app/spdk_nvme_identify/identify.o 00:04:54.006 TEST_HEADER include/spdk/accel.h 00:04:54.006 CC app/spdk_nvme_perf/perf.o 00:04:54.006 TEST_HEADER include/spdk/accel_module.h 00:04:54.006 TEST_HEADER include/spdk/barrier.h 00:04:54.006 TEST_HEADER include/spdk/assert.h 00:04:54.006 TEST_HEADER include/spdk/base64.h 00:04:54.006 TEST_HEADER include/spdk/bdev.h 00:04:54.006 TEST_HEADER include/spdk/bdev_module.h 00:04:54.006 TEST_HEADER include/spdk/bdev_zone.h 00:04:54.006 TEST_HEADER include/spdk/bit_array.h 00:04:54.006 TEST_HEADER include/spdk/bit_pool.h 00:04:54.006 TEST_HEADER include/spdk/blob_bdev.h 00:04:54.006 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:54.006 TEST_HEADER include/spdk/blobfs.h 00:04:54.006 TEST_HEADER include/spdk/blob.h 00:04:54.006 TEST_HEADER include/spdk/conf.h 00:04:54.006 TEST_HEADER include/spdk/config.h 00:04:54.006 TEST_HEADER include/spdk/cpuset.h 00:04:54.006 TEST_HEADER include/spdk/crc16.h 00:04:54.006 TEST_HEADER include/spdk/crc64.h 00:04:54.006 TEST_HEADER include/spdk/crc32.h 00:04:54.006 TEST_HEADER include/spdk/dif.h 00:04:54.006 TEST_HEADER include/spdk/dma.h 00:04:54.006 TEST_HEADER include/spdk/endian.h 00:04:54.006 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:54.006 TEST_HEADER include/spdk/env_dpdk.h 00:04:54.006 CC 
app/nvmf_tgt/nvmf_main.o 00:04:54.006 TEST_HEADER include/spdk/env.h 00:04:54.006 TEST_HEADER include/spdk/fd_group.h 00:04:54.006 TEST_HEADER include/spdk/event.h 00:04:54.006 CC app/iscsi_tgt/iscsi_tgt.o 00:04:54.006 TEST_HEADER include/spdk/fd.h 00:04:54.006 TEST_HEADER include/spdk/file.h 00:04:54.006 CC app/spdk_dd/spdk_dd.o 00:04:54.006 TEST_HEADER include/spdk/fsdev.h 00:04:54.006 TEST_HEADER include/spdk/fsdev_module.h 00:04:54.006 TEST_HEADER include/spdk/ftl.h 00:04:54.006 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:54.006 TEST_HEADER include/spdk/hexlify.h 00:04:54.006 TEST_HEADER include/spdk/gpt_spec.h 00:04:54.006 TEST_HEADER include/spdk/histogram_data.h 00:04:54.006 TEST_HEADER include/spdk/idxd.h 00:04:54.006 TEST_HEADER include/spdk/idxd_spec.h 00:04:54.006 TEST_HEADER include/spdk/init.h 00:04:54.006 TEST_HEADER include/spdk/ioat.h 00:04:54.006 TEST_HEADER include/spdk/ioat_spec.h 00:04:54.006 TEST_HEADER include/spdk/iscsi_spec.h 00:04:54.006 TEST_HEADER include/spdk/json.h 00:04:54.006 TEST_HEADER include/spdk/jsonrpc.h 00:04:54.006 TEST_HEADER include/spdk/keyring.h 00:04:54.006 TEST_HEADER include/spdk/likely.h 00:04:54.006 TEST_HEADER include/spdk/keyring_module.h 00:04:54.006 CC app/spdk_tgt/spdk_tgt.o 00:04:54.006 TEST_HEADER include/spdk/log.h 00:04:54.006 TEST_HEADER include/spdk/lvol.h 00:04:54.006 TEST_HEADER include/spdk/md5.h 00:04:54.006 TEST_HEADER include/spdk/memory.h 00:04:54.006 TEST_HEADER include/spdk/mmio.h 00:04:54.006 TEST_HEADER include/spdk/nbd.h 00:04:54.006 TEST_HEADER include/spdk/net.h 00:04:54.006 TEST_HEADER include/spdk/notify.h 00:04:54.006 TEST_HEADER include/spdk/nvme.h 00:04:54.006 TEST_HEADER include/spdk/nvme_intel.h 00:04:54.006 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:54.006 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:54.006 TEST_HEADER include/spdk/nvme_spec.h 00:04:54.006 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:54.006 TEST_HEADER include/spdk/nvme_zns.h 00:04:54.006 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:54.006 TEST_HEADER include/spdk/nvmf.h 00:04:54.006 TEST_HEADER include/spdk/nvmf_spec.h 00:04:54.006 TEST_HEADER include/spdk/nvmf_transport.h 00:04:54.006 TEST_HEADER include/spdk/opal.h 00:04:54.006 TEST_HEADER include/spdk/opal_spec.h 00:04:54.006 TEST_HEADER include/spdk/pci_ids.h 00:04:54.006 TEST_HEADER include/spdk/queue.h 00:04:54.006 TEST_HEADER include/spdk/pipe.h 00:04:54.006 TEST_HEADER include/spdk/reduce.h 00:04:54.006 TEST_HEADER include/spdk/rpc.h 00:04:54.006 TEST_HEADER include/spdk/scheduler.h 00:04:54.006 TEST_HEADER include/spdk/scsi.h 00:04:54.006 TEST_HEADER include/spdk/scsi_spec.h 00:04:54.006 TEST_HEADER include/spdk/sock.h 00:04:54.006 TEST_HEADER include/spdk/stdinc.h 00:04:54.006 TEST_HEADER include/spdk/string.h 00:04:54.006 TEST_HEADER include/spdk/thread.h 00:04:54.006 TEST_HEADER include/spdk/trace.h 00:04:54.006 TEST_HEADER include/spdk/trace_parser.h 00:04:54.006 TEST_HEADER include/spdk/tree.h 00:04:54.006 TEST_HEADER include/spdk/ublk.h 00:04:54.006 TEST_HEADER include/spdk/util.h 00:04:54.006 TEST_HEADER include/spdk/uuid.h 00:04:54.006 TEST_HEADER include/spdk/version.h 00:04:54.006 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:54.006 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:54.006 TEST_HEADER include/spdk/vhost.h 00:04:54.006 TEST_HEADER include/spdk/vmd.h 00:04:54.006 TEST_HEADER include/spdk/xor.h 00:04:54.006 TEST_HEADER include/spdk/zipf.h 00:04:54.006 CXX test/cpp_headers/accel.o 00:04:54.006 CXX test/cpp_headers/accel_module.o 00:04:54.006 CXX 
test/cpp_headers/assert.o 00:04:54.006 CXX test/cpp_headers/barrier.o 00:04:54.006 CXX test/cpp_headers/base64.o 00:04:54.006 CXX test/cpp_headers/bdev.o 00:04:54.006 CXX test/cpp_headers/bdev_module.o 00:04:54.006 CXX test/cpp_headers/bdev_zone.o 00:04:54.006 CXX test/cpp_headers/bit_array.o 00:04:54.006 CXX test/cpp_headers/blob_bdev.o 00:04:54.006 CXX test/cpp_headers/bit_pool.o 00:04:54.006 CXX test/cpp_headers/blobfs_bdev.o 00:04:54.006 CXX test/cpp_headers/blobfs.o 00:04:54.006 CXX test/cpp_headers/blob.o 00:04:54.006 CXX test/cpp_headers/config.o 00:04:54.006 CXX test/cpp_headers/conf.o 00:04:54.006 CXX test/cpp_headers/cpuset.o 00:04:54.006 CXX test/cpp_headers/crc16.o 00:04:54.006 CXX test/cpp_headers/crc32.o 00:04:54.006 CXX test/cpp_headers/crc64.o 00:04:54.006 CXX test/cpp_headers/dma.o 00:04:54.006 CXX test/cpp_headers/dif.o 00:04:54.006 CXX test/cpp_headers/endian.o 00:04:54.006 CXX test/cpp_headers/env.o 00:04:54.006 CXX test/cpp_headers/env_dpdk.o 00:04:54.006 CXX test/cpp_headers/event.o 00:04:54.006 CXX test/cpp_headers/fd_group.o 00:04:54.006 CXX test/cpp_headers/fd.o 00:04:54.006 CXX test/cpp_headers/fsdev.o 00:04:54.006 CXX test/cpp_headers/file.o 00:04:54.006 CXX test/cpp_headers/ftl.o 00:04:54.006 CXX test/cpp_headers/fsdev_module.o 00:04:54.006 CXX test/cpp_headers/fuse_dispatcher.o 00:04:54.006 CXX test/cpp_headers/gpt_spec.o 00:04:54.006 CXX test/cpp_headers/histogram_data.o 00:04:54.006 CXX test/cpp_headers/hexlify.o 00:04:54.006 CXX test/cpp_headers/idxd_spec.o 00:04:54.006 CXX test/cpp_headers/idxd.o 00:04:54.006 CXX test/cpp_headers/init.o 00:04:54.006 CXX test/cpp_headers/ioat.o 00:04:54.006 CXX test/cpp_headers/iscsi_spec.o 00:04:54.006 CXX test/cpp_headers/ioat_spec.o 00:04:54.006 CXX test/cpp_headers/jsonrpc.o 00:04:54.007 CXX test/cpp_headers/json.o 00:04:54.007 CXX test/cpp_headers/keyring.o 00:04:54.007 CXX test/cpp_headers/log.o 00:04:54.007 CXX test/cpp_headers/keyring_module.o 00:04:54.007 CXX test/cpp_headers/likely.o 00:04:54.007 CXX test/cpp_headers/md5.o 00:04:54.007 CXX test/cpp_headers/lvol.o 00:04:54.007 CXX test/cpp_headers/memory.o 00:04:54.007 CXX test/cpp_headers/net.o 00:04:54.007 CXX test/cpp_headers/nbd.o 00:04:54.007 CXX test/cpp_headers/nvme.o 00:04:54.007 CXX test/cpp_headers/mmio.o 00:04:54.007 CXX test/cpp_headers/notify.o 00:04:54.277 CXX test/cpp_headers/nvme_ocssd.o 00:04:54.277 CC test/thread/poller_perf/poller_perf.o 00:04:54.277 CC test/app/stub/stub.o 00:04:54.277 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:54.277 CXX test/cpp_headers/nvme_spec.o 00:04:54.277 CC examples/ioat/verify/verify.o 00:04:54.277 CXX test/cpp_headers/nvme_intel.o 00:04:54.277 CC test/env/vtophys/vtophys.o 00:04:54.277 CXX test/cpp_headers/nvme_zns.o 00:04:54.277 CC examples/util/zipf/zipf.o 00:04:54.277 LINK spdk_lspci 00:04:54.277 CXX test/cpp_headers/nvmf_cmd.o 00:04:54.277 CC test/app/histogram_perf/histogram_perf.o 00:04:54.277 CXX test/cpp_headers/nvmf.o 00:04:54.277 CC examples/ioat/perf/perf.o 00:04:54.277 CC app/fio/nvme/fio_plugin.o 00:04:54.277 CXX test/cpp_headers/opal_spec.o 00:04:54.277 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:54.277 CXX test/cpp_headers/nvmf_transport.o 00:04:54.277 CXX test/cpp_headers/nvmf_spec.o 00:04:54.277 CXX test/cpp_headers/opal.o 00:04:54.277 CXX test/cpp_headers/pci_ids.o 00:04:54.277 CC test/app/jsoncat/jsoncat.o 00:04:54.277 CXX test/cpp_headers/queue.o 00:04:54.277 CXX test/cpp_headers/pipe.o 00:04:54.277 CXX test/cpp_headers/scheduler.o 00:04:54.277 CXX test/cpp_headers/reduce.o 00:04:54.277 CXX 
test/cpp_headers/rpc.o 00:04:54.277 CC test/app/bdev_svc/bdev_svc.o 00:04:54.277 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:54.277 CXX test/cpp_headers/scsi.o 00:04:54.277 CC test/env/memory/memory_ut.o 00:04:54.277 CXX test/cpp_headers/scsi_spec.o 00:04:54.277 CXX test/cpp_headers/thread.o 00:04:54.277 CXX test/cpp_headers/sock.o 00:04:54.277 CXX test/cpp_headers/trace.o 00:04:54.277 CXX test/cpp_headers/string.o 00:04:54.277 CXX test/cpp_headers/stdinc.o 00:04:54.277 CC test/env/pci/pci_ut.o 00:04:54.277 CXX test/cpp_headers/trace_parser.o 00:04:54.277 CXX test/cpp_headers/tree.o 00:04:54.277 CXX test/cpp_headers/ublk.o 00:04:54.277 CC test/dma/test_dma/test_dma.o 00:04:54.277 CXX test/cpp_headers/util.o 00:04:54.277 CXX test/cpp_headers/uuid.o 00:04:54.277 CXX test/cpp_headers/version.o 00:04:54.277 LINK rpc_client_test 00:04:54.277 CXX test/cpp_headers/vfio_user_spec.o 00:04:54.277 CXX test/cpp_headers/vfio_user_pci.o 00:04:54.277 CXX test/cpp_headers/vhost.o 00:04:54.277 CXX test/cpp_headers/vmd.o 00:04:54.277 CXX test/cpp_headers/xor.o 00:04:54.277 CXX test/cpp_headers/zipf.o 00:04:54.277 CC app/fio/bdev/fio_plugin.o 00:04:54.277 LINK interrupt_tgt 00:04:54.277 LINK nvmf_tgt 00:04:54.277 LINK spdk_trace_record 00:04:54.550 LINK spdk_nvme_discover 00:04:54.551 LINK iscsi_tgt 00:04:54.551 LINK spdk_tgt 00:04:55.126 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:55.126 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:55.126 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:55.126 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:55.126 CC test/env/mem_callbacks/mem_callbacks.o 00:04:55.126 LINK spdk_trace 00:04:55.126 LINK zipf 00:04:55.126 LINK histogram_perf 00:04:55.126 LINK spdk_dd 00:04:55.126 LINK vtophys 00:04:55.384 LINK poller_perf 00:04:55.384 LINK env_dpdk_post_init 00:04:55.384 LINK stub 00:04:55.384 LINK jsoncat 00:04:55.384 LINK bdev_svc 00:04:55.384 LINK ioat_perf 00:04:55.384 LINK verify 00:04:55.642 LINK pci_ut 00:04:55.642 CC app/vhost/vhost.o 00:04:55.642 LINK spdk_top 00:04:55.642 LINK spdk_nvme_perf 00:04:55.642 LINK test_dma 00:04:55.642 CC examples/idxd/perf/perf.o 00:04:55.902 CC examples/sock/hello_world/hello_sock.o 00:04:55.902 CC examples/vmd/led/led.o 00:04:55.902 CC examples/vmd/lsvmd/lsvmd.o 00:04:55.902 LINK spdk_nvme 00:04:55.902 LINK spdk_nvme_identify 00:04:55.902 LINK nvme_fuzz 00:04:55.902 LINK vhost_fuzz 00:04:55.902 CC examples/thread/thread/thread_ex.o 00:04:55.902 CC test/event/event_perf/event_perf.o 00:04:55.902 CC test/event/reactor_perf/reactor_perf.o 00:04:55.902 CC test/event/reactor/reactor.o 00:04:55.902 CC test/event/app_repeat/app_repeat.o 00:04:55.902 LINK vhost 00:04:55.902 CC test/event/scheduler/scheduler.o 00:04:55.902 LINK spdk_bdev 00:04:55.902 LINK led 00:04:55.902 LINK lsvmd 00:04:55.902 LINK mem_callbacks 00:04:56.160 LINK hello_sock 00:04:56.160 LINK event_perf 00:04:56.160 LINK reactor_perf 00:04:56.160 LINK reactor 00:04:56.160 LINK app_repeat 00:04:56.160 LINK idxd_perf 00:04:56.160 LINK thread 00:04:56.160 LINK scheduler 00:04:56.420 CC test/nvme/startup/startup.o 00:04:56.420 CC test/nvme/reset/reset.o 00:04:56.420 CC test/nvme/overhead/overhead.o 00:04:56.420 CC test/nvme/aer/aer.o 00:04:56.420 CC test/nvme/simple_copy/simple_copy.o 00:04:56.420 CC test/nvme/fused_ordering/fused_ordering.o 00:04:56.420 CC test/nvme/compliance/nvme_compliance.o 00:04:56.420 CC test/nvme/cuse/cuse.o 00:04:56.420 CC test/nvme/sgl/sgl.o 00:04:56.420 CC test/accel/dif/dif.o 00:04:56.420 CC test/nvme/fdp/fdp.o 00:04:56.420 CC 
test/nvme/boot_partition/boot_partition.o 00:04:56.420 CC test/nvme/connect_stress/connect_stress.o 00:04:56.420 CC test/nvme/reserve/reserve.o 00:04:56.420 CC test/nvme/e2edp/nvme_dp.o 00:04:56.420 CC test/nvme/err_injection/err_injection.o 00:04:56.420 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:56.420 CC test/blobfs/mkfs/mkfs.o 00:04:56.420 CC test/lvol/esnap/esnap.o 00:04:56.420 LINK memory_ut 00:04:56.420 LINK err_injection 00:04:56.420 LINK startup 00:04:56.420 LINK fused_ordering 00:04:56.679 LINK connect_stress 00:04:56.679 LINK boot_partition 00:04:56.679 LINK simple_copy 00:04:56.679 LINK reserve 00:04:56.679 CC examples/nvme/arbitration/arbitration.o 00:04:56.679 LINK doorbell_aers 00:04:56.679 LINK reset 00:04:56.679 CC examples/nvme/hotplug/hotplug.o 00:04:56.679 CC examples/nvme/reconnect/reconnect.o 00:04:56.679 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:56.679 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:56.679 CC examples/nvme/hello_world/hello_world.o 00:04:56.679 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:56.679 CC examples/nvme/abort/abort.o 00:04:56.679 LINK sgl 00:04:56.679 LINK aer 00:04:56.679 LINK overhead 00:04:56.679 LINK mkfs 00:04:56.679 LINK nvme_dp 00:04:56.679 LINK nvme_compliance 00:04:56.679 LINK fdp 00:04:56.679 CC examples/accel/perf/accel_perf.o 00:04:56.679 LINK iscsi_fuzz 00:04:56.679 CC examples/blob/cli/blobcli.o 00:04:56.679 CC examples/blob/hello_world/hello_blob.o 00:04:56.679 LINK pmr_persistence 00:04:56.679 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:56.679 LINK cmb_copy 00:04:56.941 LINK hotplug 00:04:56.941 LINK hello_world 00:04:56.941 LINK arbitration 00:04:56.941 LINK reconnect 00:04:56.941 LINK dif 00:04:56.941 LINK abort 00:04:56.941 LINK nvme_manage 00:04:56.941 LINK hello_blob 00:04:57.202 LINK hello_fsdev 00:04:57.202 LINK accel_perf 00:04:57.202 LINK blobcli 00:04:57.462 LINK cuse 00:04:57.462 CC test/bdev/bdevio/bdevio.o 00:04:57.724 CC examples/bdev/hello_world/hello_bdev.o 00:04:57.724 CC examples/bdev/bdevperf/bdevperf.o 00:04:57.986 LINK bdevio 00:04:57.986 LINK hello_bdev 00:04:58.558 LINK bdevperf 00:04:59.129 CC examples/nvmf/nvmf/nvmf.o 00:04:59.390 LINK nvmf 00:05:01.306 LINK esnap 00:05:01.306 00:05:01.306 real 0m56.602s 00:05:01.306 user 8m8.943s 00:05:01.306 sys 6m9.599s 00:05:01.306 17:20:53 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:01.306 17:20:53 make -- common/autotest_common.sh@10 -- $ set +x 00:05:01.306 ************************************ 00:05:01.306 END TEST make 00:05:01.306 ************************************ 00:05:01.306 17:20:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:01.306 17:20:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:01.306 17:20:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:01.306 17:20:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.306 17:20:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:01.306 17:20:53 -- pm/common@44 -- $ pid=1386239 00:05:01.306 17:20:53 -- pm/common@50 -- $ kill -TERM 1386239 00:05:01.306 17:20:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.306 17:20:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:01.306 17:20:53 -- pm/common@44 -- $ pid=1386240 00:05:01.306 17:20:53 -- pm/common@50 -- $ kill -TERM 1386240 00:05:01.306 17:20:53 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:01.306 17:20:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:01.306 17:20:53 -- pm/common@44 -- $ pid=1386242 00:05:01.306 17:20:53 -- pm/common@50 -- $ kill -TERM 1386242 00:05:01.306 17:20:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.306 17:20:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:01.306 17:20:53 -- pm/common@44 -- $ pid=1386265 00:05:01.306 17:20:53 -- pm/common@50 -- $ sudo -E kill -TERM 1386265 00:05:01.306 17:20:53 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:01.306 17:20:53 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:01.568 17:20:53 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.568 17:20:53 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.568 17:20:53 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:01.568 17:20:53 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:01.568 17:20:53 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.568 17:20:53 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.568 17:20:53 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.568 17:20:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.568 17:20:53 -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.568 17:20:53 -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.568 17:20:53 -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.568 17:20:53 -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.568 17:20:53 -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.568 17:20:53 -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.568 17:20:53 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.568 17:20:53 -- scripts/common.sh@344 -- # case "$op" in 00:05:01.568 17:20:53 -- scripts/common.sh@345 -- # : 1 00:05:01.568 17:20:53 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.568 17:20:53 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.568 17:20:53 -- scripts/common.sh@365 -- # decimal 1 00:05:01.568 17:20:53 -- scripts/common.sh@353 -- # local d=1 00:05:01.568 17:20:53 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.568 17:20:53 -- scripts/common.sh@355 -- # echo 1 00:05:01.568 17:20:53 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.568 17:20:53 -- scripts/common.sh@366 -- # decimal 2 00:05:01.568 17:20:53 -- scripts/common.sh@353 -- # local d=2 00:05:01.568 17:20:53 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.568 17:20:53 -- scripts/common.sh@355 -- # echo 2 00:05:01.568 17:20:53 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.568 17:20:53 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.568 17:20:53 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.568 17:20:53 -- scripts/common.sh@368 -- # return 0 00:05:01.568 17:20:53 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.568 17:20:53 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.568 --rc genhtml_branch_coverage=1 00:05:01.568 --rc genhtml_function_coverage=1 00:05:01.568 --rc genhtml_legend=1 00:05:01.568 --rc geninfo_all_blocks=1 00:05:01.568 --rc geninfo_unexecuted_blocks=1 00:05:01.568 00:05:01.568 ' 00:05:01.568 17:20:53 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.568 --rc genhtml_branch_coverage=1 00:05:01.568 --rc genhtml_function_coverage=1 00:05:01.568 --rc genhtml_legend=1 00:05:01.568 --rc geninfo_all_blocks=1 00:05:01.568 --rc geninfo_unexecuted_blocks=1 00:05:01.568 00:05:01.568 ' 00:05:01.568 17:20:53 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.568 --rc genhtml_branch_coverage=1 00:05:01.568 --rc genhtml_function_coverage=1 00:05:01.568 --rc genhtml_legend=1 00:05:01.568 --rc geninfo_all_blocks=1 00:05:01.569 --rc geninfo_unexecuted_blocks=1 00:05:01.569 00:05:01.569 ' 00:05:01.569 17:20:53 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.569 --rc genhtml_branch_coverage=1 00:05:01.569 --rc genhtml_function_coverage=1 00:05:01.569 --rc genhtml_legend=1 00:05:01.569 --rc geninfo_all_blocks=1 00:05:01.569 --rc geninfo_unexecuted_blocks=1 00:05:01.569 00:05:01.569 ' 00:05:01.569 17:20:53 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.569 17:20:53 -- nvmf/common.sh@7 -- # uname -s 00:05:01.569 17:20:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.569 17:20:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.569 17:20:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.569 17:20:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.569 17:20:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.569 17:20:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.569 17:20:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.569 17:20:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.569 17:20:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.569 17:20:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.569 17:20:53 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:01.569 17:20:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:01.569 17:20:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.569 17:20:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.569 17:20:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:01.569 17:20:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.569 17:20:53 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:01.569 17:20:53 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.569 17:20:53 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.569 17:20:53 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.569 17:20:53 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.569 17:20:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.569 17:20:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.569 17:20:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.569 17:20:53 -- paths/export.sh@5 -- # export PATH 00:05:01.569 17:20:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.569 17:20:53 -- nvmf/common.sh@51 -- # : 0 00:05:01.569 17:20:53 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.569 17:20:53 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.569 17:20:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.569 17:20:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.569 17:20:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.569 17:20:53 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.569 17:20:53 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.569 17:20:53 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.569 17:20:53 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.569 17:20:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:01.569 17:20:53 -- spdk/autotest.sh@32 -- # uname -s 00:05:01.569 17:20:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:01.569 17:20:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:01.569 17:20:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
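Note on the "[: : integer expression expected" message above: the xtrace shows nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']', and test's -eq operator requires integer operands on both sides, so an unset flag variable makes the test itself report an error (the branch is simply not taken, which is why the run continues). A minimal sketch of the failure mode and the usual guard, assuming the variable is an optional 0/1 flag:

    # errors with "integer expression expected" when FLAG is unset or empty
    [ "$FLAG" -eq 1 ] && echo enabled

    # guard: give the flag a numeric default before the comparison
    [ "${FLAG:-0}" -eq 1 ] && echo enabled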
00:05:01.569 17:20:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:01.569 17:20:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:01.569 17:20:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:01.569 17:20:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:01.569 17:20:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:01.569 17:20:53 -- spdk/autotest.sh@48 -- # udevadm_pid=1451812 00:05:01.569 17:20:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:01.569 17:20:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:01.569 17:20:53 -- pm/common@17 -- # local monitor 00:05:01.569 17:20:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.569 17:20:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.569 17:20:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.569 17:20:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.569 17:20:53 -- pm/common@21 -- # date +%s 00:05:01.569 17:20:53 -- pm/common@21 -- # date +%s 00:05:01.569 17:20:53 -- pm/common@25 -- # sleep 1 00:05:01.569 17:20:53 -- pm/common@21 -- # date +%s 00:05:01.569 17:20:53 -- pm/common@21 -- # date +%s 00:05:01.569 17:20:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733502053 00:05:01.569 17:20:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733502053 00:05:01.569 17:20:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733502053 00:05:01.569 17:20:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733502053 00:05:01.569 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733502053_collect-cpu-load.pm.log 00:05:01.569 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733502053_collect-vmstat.pm.log 00:05:01.569 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733502053_collect-cpu-temp.pm.log 00:05:01.569 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733502053_collect-bmc-pm.bmc.pm.log 00:05:02.513 17:20:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:02.513 17:20:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:02.513 17:20:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.513 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:05:02.513 17:20:54 -- spdk/autotest.sh@59 -- # create_test_list 00:05:02.513 17:20:54 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:02.513 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:05:02.775 17:20:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:02.775 17:20:54 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.775 17:20:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.775 17:20:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:02.775 17:20:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.775 17:20:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:02.775 17:20:54 -- common/autotest_common.sh@1457 -- # uname 00:05:02.775 17:20:54 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:02.775 17:20:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:02.775 17:20:54 -- common/autotest_common.sh@1477 -- # uname 00:05:02.775 17:20:54 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:02.775 17:20:54 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:02.775 17:20:54 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:02.775 lcov: LCOV version 1.15 00:05:02.775 17:20:54 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:29.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:29.373 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:33.602 17:21:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:33.602 17:21:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.602 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:05:33.602 17:21:25 -- spdk/autotest.sh@78 -- # rm -f 00:05:33.603 17:21:25 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:36.999 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:65:00.0 (144d a80a): Already using the nvme driver 00:05:36.999 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:05:36.999 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:05:37.571 17:21:29 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:37.571 17:21:29 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:37.571 17:21:29 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:37.571 17:21:29 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:37.571 17:21:29 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:37.571 17:21:29 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:37.571 17:21:29 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:37.571 17:21:29 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:05:37.571 17:21:29 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:37.571 17:21:29 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:37.571 17:21:29 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:37.571 17:21:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:37.571 17:21:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:37.571 17:21:29 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:37.571 17:21:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:37.571 17:21:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:37.571 17:21:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:37.571 17:21:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:37.571 17:21:29 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:37.571 No valid GPT data, bailing 00:05:37.571 17:21:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:37.571 17:21:29 -- scripts/common.sh@394 -- # pt= 00:05:37.571 17:21:29 -- scripts/common.sh@395 -- # return 1 00:05:37.571 17:21:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:37.571 1+0 records in 00:05:37.571 1+0 records out 00:05:37.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00194005 s, 540 MB/s 00:05:37.571 17:21:29 -- spdk/autotest.sh@105 -- # sync 00:05:37.571 17:21:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:37.571 17:21:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:37.571 17:21:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:47.573 17:21:37 -- spdk/autotest.sh@111 -- # uname -s 00:05:47.573 17:21:37 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:47.573 17:21:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:47.573 17:21:37 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:49.483 Hugepages 00:05:49.483 node hugesize free / total 00:05:49.483 node0 1048576kB 0 / 0 00:05:49.483 node0 2048kB 0 / 0 00:05:49.483 node1 1048576kB 0 / 0 00:05:49.483 node1 2048kB 0 / 0 00:05:49.483 00:05:49.483 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:49.483 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:49.483 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:49.483 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:49.483 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:49.483 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:49.483 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:49.483 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:49.483 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:49.743 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:49.743 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:49.743 I/OAT 0000:80:01.1 8086 0b00 1 
ioatdma - - 00:05:49.743 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:49.743 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:49.743 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:49.743 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:49.743 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:49.743 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:49.743 17:21:41 -- spdk/autotest.sh@117 -- # uname -s 00:05:49.743 17:21:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:49.743 17:21:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:49.743 17:21:41 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:53.045 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:53.045 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:53.045 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:53.045 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:53.045 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:53.305 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:55.219 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:55.480 17:21:47 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:56.423 17:21:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:56.423 17:21:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:56.423 17:21:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:56.423 17:21:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:56.423 17:21:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:56.423 17:21:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:56.423 17:21:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:56.423 17:21:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:56.423 17:21:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:56.423 17:21:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:56.423 17:21:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:56.423 17:21:48 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:00.632 Waiting for block devices as requested 00:06:00.632 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:00.632 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:00.632 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:00.632 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:00.633 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:00.633 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:00.633 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:00.633 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:00.633 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:00.894 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:00.894 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 
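Note on the get_nvme_bdfs trace above: the helper derives the PCI address list from gen_nvme.sh, which emits a JSON bdev config, and jq pulls each controller's traddr out of it; on this node that yields the single drive 0000:65:00.0. A stand-alone sketch of the same extraction, with the input shape assumed from the filter in the trace:

    # same jq filter as in the trace; the sample JSON mimics gen_nvme.sh output
    echo '{"config":[{"params":{"traddr":"0000:65:00.0"}}]}' \
        | jq -r '.config[].params.traddr'
    # prints: 0000:65:00.0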
00:06:00.894 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:01.154 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:01.154 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:01.154 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:01.414 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:01.414 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:01.675 17:21:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:01.675 17:21:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:01.675 17:21:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:01.675 17:21:53 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:06:01.675 17:21:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:01.675 17:21:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:01.675 17:21:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:01.675 17:21:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:01.675 17:21:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:01.675 17:21:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:01.675 17:21:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:01.675 17:21:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:01.675 17:21:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:01.675 17:21:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:06:01.675 17:21:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:01.675 17:21:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:01.675 17:21:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:01.675 17:21:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:01.675 17:21:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:01.675 17:21:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:01.675 17:21:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:01.675 17:21:53 -- common/autotest_common.sh@1543 -- # continue 00:06:01.675 17:21:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:01.675 17:21:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:01.675 17:21:53 -- common/autotest_common.sh@10 -- # set +x 00:06:01.675 17:21:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:01.675 17:21:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.675 17:21:53 -- common/autotest_common.sh@10 -- # set +x 00:06:01.675 17:21:53 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:05.903 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:00:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:05.903 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:05.903 17:21:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:05.903 17:21:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.903 17:21:57 -- common/autotest_common.sh@10 -- # set +x 00:06:05.903 17:21:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:05.903 17:21:57 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:05.903 17:21:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:05.903 17:21:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:05.903 17:21:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:05.903 17:21:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:05.903 17:21:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:05.903 17:21:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:05.903 17:21:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:05.903 17:21:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:05.903 17:21:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:05.903 17:21:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:05.903 17:21:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:05.903 17:21:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:05.903 17:21:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:06:05.903 17:21:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:05.903 17:21:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:05.903 17:21:57 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:06:05.903 17:21:57 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:05.903 17:21:57 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:05.903 17:21:57 -- common/autotest_common.sh@1572 -- # return 0 00:06:05.903 17:21:57 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:05.903 17:21:57 -- common/autotest_common.sh@1580 -- # return 0 00:06:05.903 17:21:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:05.903 17:21:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:05.903 17:21:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:05.903 17:21:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:05.903 17:21:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:05.903 17:21:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.903 17:21:57 -- common/autotest_common.sh@10 -- # set +x 00:06:05.903 17:21:57 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:05.903 17:21:57 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:05.903 17:21:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.903 17:21:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.903 17:21:57 -- common/autotest_common.sh@10 -- # set +x 00:06:06.164 ************************************ 00:06:06.164 START TEST env 00:06:06.164 ************************************ 00:06:06.164 17:21:57 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
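Note on the opal_revert_cleanup trace above: get_nvme_bdfs_by_id 0x0a54 reads each controller's PCI device ID from sysfs and keeps only devices matching 0x0a54 (the escaped glob \0\x\0\a\5\4 is just how xtrace prints that literal string). This drive reports 0xa80a, so the list stays empty and the revert body is skipped. A minimal sketch of the same check, with the BDF taken from the log:

    bdf=0000:65:00.0                                   # from the trace above
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # here: 0xa80a
    [[ $device == 0x0a54 ]] && echo "opal revert candidate: $bdf"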
00:06:06.164 * Looking for test storage... 00:06:06.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.164 17:21:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.164 17:21:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.164 17:21:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.164 17:21:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.164 17:21:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.164 17:21:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.164 17:21:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.164 17:21:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.164 17:21:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.164 17:21:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.164 17:21:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.164 17:21:58 env -- scripts/common.sh@344 -- # case "$op" in 00:06:06.164 17:21:58 env -- scripts/common.sh@345 -- # : 1 00:06:06.164 17:21:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.164 17:21:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.164 17:21:58 env -- scripts/common.sh@365 -- # decimal 1 00:06:06.164 17:21:58 env -- scripts/common.sh@353 -- # local d=1 00:06:06.164 17:21:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.164 17:21:58 env -- scripts/common.sh@355 -- # echo 1 00:06:06.164 17:21:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.164 17:21:58 env -- scripts/common.sh@366 -- # decimal 2 00:06:06.164 17:21:58 env -- scripts/common.sh@353 -- # local d=2 00:06:06.164 17:21:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.164 17:21:58 env -- scripts/common.sh@355 -- # echo 2 00:06:06.164 17:21:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.164 17:21:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.164 17:21:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.164 17:21:58 env -- scripts/common.sh@368 -- # return 0 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.164 --rc genhtml_branch_coverage=1 00:06:06.164 --rc genhtml_function_coverage=1 00:06:06.164 --rc genhtml_legend=1 00:06:06.164 --rc geninfo_all_blocks=1 00:06:06.164 --rc geninfo_unexecuted_blocks=1 00:06:06.164 00:06:06.164 ' 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.164 --rc genhtml_branch_coverage=1 00:06:06.164 --rc genhtml_function_coverage=1 00:06:06.164 --rc genhtml_legend=1 00:06:06.164 --rc geninfo_all_blocks=1 00:06:06.164 --rc geninfo_unexecuted_blocks=1 00:06:06.164 00:06:06.164 ' 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.164 --rc genhtml_branch_coverage=1 00:06:06.164 
--rc genhtml_function_coverage=1 00:06:06.164 --rc genhtml_legend=1 00:06:06.164 --rc geninfo_all_blocks=1 00:06:06.164 --rc geninfo_unexecuted_blocks=1 00:06:06.164 00:06:06.164 ' 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.164 --rc genhtml_branch_coverage=1 00:06:06.164 --rc genhtml_function_coverage=1 00:06:06.164 --rc genhtml_legend=1 00:06:06.164 --rc geninfo_all_blocks=1 00:06:06.164 --rc geninfo_unexecuted_blocks=1 00:06:06.164 00:06:06.164 ' 00:06:06.164 17:21:58 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.164 17:21:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.164 17:21:58 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.164 ************************************ 00:06:06.164 START TEST env_memory 00:06:06.164 ************************************ 00:06:06.164 17:21:58 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:06.164 00:06:06.164 00:06:06.164 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.164 http://cunit.sourceforge.net/ 00:06:06.164 00:06:06.164 00:06:06.164 Suite: memory 00:06:06.425 Test: alloc and free memory map ...[2024-12-06 17:21:58.257557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:06.425 passed 00:06:06.425 Test: mem map translation ...[2024-12-06 17:21:58.283188] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:06.425 [2024-12-06 17:21:58.283219] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:06.425 [2024-12-06 17:21:58.283265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:06.425 [2024-12-06 17:21:58.283277] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:06.425 passed 00:06:06.425 Test: mem map registration ...[2024-12-06 17:21:58.338516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:06.425 [2024-12-06 17:21:58.338554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:06.425 passed 00:06:06.425 Test: mem map adjacent registrations ...passed 00:06:06.425 00:06:06.425 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.425 suites 1 1 n/a 0 0 00:06:06.425 tests 4 4 4 0 0 00:06:06.425 asserts 152 152 152 0 n/a 00:06:06.425 00:06:06.425 Elapsed time = 0.193 seconds 00:06:06.425 00:06:06.425 real 0m0.203s 00:06:06.425 user 0m0.193s 00:06:06.425 sys 0m0.009s 00:06:06.425 17:21:58 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.425 17:21:58 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.425 ************************************ 00:06:06.425 END TEST env_memory 00:06:06.425 ************************************ 00:06:06.425 17:21:58 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:06.425 17:21:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.425 17:21:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.425 17:21:58 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.687 ************************************ 00:06:06.687 START TEST env_vtophys 00:06:06.687 ************************************ 00:06:06.687 17:21:58 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:06.687 EAL: lib.eal log level changed from notice to debug 00:06:06.687 EAL: Detected lcore 0 as core 0 on socket 0 00:06:06.687 EAL: Detected lcore 1 as core 1 on socket 0 00:06:06.687 EAL: Detected lcore 2 as core 2 on socket 0 00:06:06.687 EAL: Detected lcore 3 as core 3 on socket 0 00:06:06.687 EAL: Detected lcore 4 as core 4 on socket 0 00:06:06.687 EAL: Detected lcore 5 as core 5 on socket 0 00:06:06.687 EAL: Detected lcore 6 as core 6 on socket 0 00:06:06.687 EAL: Detected lcore 7 as core 7 on socket 0 00:06:06.687 EAL: Detected lcore 8 as core 8 on socket 0 00:06:06.687 EAL: Detected lcore 9 as core 9 on socket 0 00:06:06.687 EAL: Detected lcore 10 as core 10 on socket 0 00:06:06.688 EAL: Detected lcore 11 as core 11 on socket 0 00:06:06.688 EAL: Detected lcore 12 as core 12 on socket 0 00:06:06.688 EAL: Detected lcore 13 as core 13 on socket 0 00:06:06.688 EAL: Detected lcore 14 as core 14 on socket 0 00:06:06.688 EAL: Detected lcore 15 as core 15 on socket 0 00:06:06.688 EAL: Detected lcore 16 as core 16 on socket 0 00:06:06.688 EAL: Detected lcore 17 as core 17 on socket 0 00:06:06.688 EAL: Detected lcore 18 as core 18 on socket 0 00:06:06.688 EAL: Detected lcore 19 as core 19 on socket 0 00:06:06.688 EAL: Detected lcore 20 as core 20 on socket 0 00:06:06.688 EAL: Detected lcore 21 as core 21 on socket 0 00:06:06.688 EAL: Detected lcore 22 as core 22 on socket 0 00:06:06.688 EAL: Detected lcore 23 as core 23 on socket 0 00:06:06.688 EAL: Detected lcore 24 as core 24 on socket 0 00:06:06.688 EAL: Detected lcore 25 as core 25 on socket 0 00:06:06.688 EAL: Detected lcore 26 as core 26 on socket 0 00:06:06.688 EAL: Detected lcore 27 as core 27 on socket 0 00:06:06.688 EAL: Detected lcore 28 as core 28 on socket 0 00:06:06.688 EAL: Detected lcore 29 as core 29 on socket 0 00:06:06.688 EAL: Detected lcore 30 as core 30 on socket 0 00:06:06.688 EAL: Detected lcore 31 as core 31 on socket 0 00:06:06.688 EAL: Detected lcore 32 as core 32 on socket 0 00:06:06.688 EAL: Detected lcore 33 as core 33 on socket 0 00:06:06.688 EAL: Detected lcore 34 as core 34 on socket 0 00:06:06.688 EAL: Detected lcore 35 as core 35 on socket 0 00:06:06.688 EAL: Detected lcore 36 as core 0 on socket 1 00:06:06.688 EAL: Detected lcore 37 as core 1 on socket 1 00:06:06.688 EAL: Detected lcore 38 as core 2 on socket 1 00:06:06.688 EAL: Detected lcore 39 as core 3 on socket 1 00:06:06.688 EAL: Detected lcore 40 as core 4 on socket 1 00:06:06.688 EAL: Detected lcore 41 as core 5 on socket 1 00:06:06.688 EAL: Detected lcore 42 as core 6 on socket 1 00:06:06.688 EAL: Detected lcore 43 as core 7 on socket 1 00:06:06.688 EAL: Detected lcore 44 as core 8 on socket 1 00:06:06.688 EAL: Detected 
lcore 45 as core 9 on socket 1 00:06:06.688 EAL: Detected lcore 46 as core 10 on socket 1 00:06:06.688 EAL: Detected lcore 47 as core 11 on socket 1 00:06:06.688 EAL: Detected lcore 48 as core 12 on socket 1 00:06:06.688 EAL: Detected lcore 49 as core 13 on socket 1 00:06:06.688 EAL: Detected lcore 50 as core 14 on socket 1 00:06:06.688 EAL: Detected lcore 51 as core 15 on socket 1 00:06:06.688 EAL: Detected lcore 52 as core 16 on socket 1 00:06:06.688 EAL: Detected lcore 53 as core 17 on socket 1 00:06:06.688 EAL: Detected lcore 54 as core 18 on socket 1 00:06:06.688 EAL: Detected lcore 55 as core 19 on socket 1 00:06:06.688 EAL: Detected lcore 56 as core 20 on socket 1 00:06:06.688 EAL: Detected lcore 57 as core 21 on socket 1 00:06:06.688 EAL: Detected lcore 58 as core 22 on socket 1 00:06:06.688 EAL: Detected lcore 59 as core 23 on socket 1 00:06:06.688 EAL: Detected lcore 60 as core 24 on socket 1 00:06:06.688 EAL: Detected lcore 61 as core 25 on socket 1 00:06:06.688 EAL: Detected lcore 62 as core 26 on socket 1 00:06:06.688 EAL: Detected lcore 63 as core 27 on socket 1 00:06:06.688 EAL: Detected lcore 64 as core 28 on socket 1 00:06:06.688 EAL: Detected lcore 65 as core 29 on socket 1 00:06:06.688 EAL: Detected lcore 66 as core 30 on socket 1 00:06:06.688 EAL: Detected lcore 67 as core 31 on socket 1 00:06:06.688 EAL: Detected lcore 68 as core 32 on socket 1 00:06:06.688 EAL: Detected lcore 69 as core 33 on socket 1 00:06:06.688 EAL: Detected lcore 70 as core 34 on socket 1 00:06:06.688 EAL: Detected lcore 71 as core 35 on socket 1 00:06:06.688 EAL: Detected lcore 72 as core 0 on socket 0 00:06:06.688 EAL: Detected lcore 73 as core 1 on socket 0 00:06:06.688 EAL: Detected lcore 74 as core 2 on socket 0 00:06:06.688 EAL: Detected lcore 75 as core 3 on socket 0 00:06:06.688 EAL: Detected lcore 76 as core 4 on socket 0 00:06:06.688 EAL: Detected lcore 77 as core 5 on socket 0 00:06:06.688 EAL: Detected lcore 78 as core 6 on socket 0 00:06:06.688 EAL: Detected lcore 79 as core 7 on socket 0 00:06:06.688 EAL: Detected lcore 80 as core 8 on socket 0 00:06:06.688 EAL: Detected lcore 81 as core 9 on socket 0 00:06:06.688 EAL: Detected lcore 82 as core 10 on socket 0 00:06:06.688 EAL: Detected lcore 83 as core 11 on socket 0 00:06:06.688 EAL: Detected lcore 84 as core 12 on socket 0 00:06:06.688 EAL: Detected lcore 85 as core 13 on socket 0 00:06:06.688 EAL: Detected lcore 86 as core 14 on socket 0 00:06:06.688 EAL: Detected lcore 87 as core 15 on socket 0 00:06:06.688 EAL: Detected lcore 88 as core 16 on socket 0 00:06:06.688 EAL: Detected lcore 89 as core 17 on socket 0 00:06:06.688 EAL: Detected lcore 90 as core 18 on socket 0 00:06:06.688 EAL: Detected lcore 91 as core 19 on socket 0 00:06:06.688 EAL: Detected lcore 92 as core 20 on socket 0 00:06:06.688 EAL: Detected lcore 93 as core 21 on socket 0 00:06:06.688 EAL: Detected lcore 94 as core 22 on socket 0 00:06:06.688 EAL: Detected lcore 95 as core 23 on socket 0 00:06:06.688 EAL: Detected lcore 96 as core 24 on socket 0 00:06:06.688 EAL: Detected lcore 97 as core 25 on socket 0 00:06:06.688 EAL: Detected lcore 98 as core 26 on socket 0 00:06:06.688 EAL: Detected lcore 99 as core 27 on socket 0 00:06:06.688 EAL: Detected lcore 100 as core 28 on socket 0 00:06:06.688 EAL: Detected lcore 101 as core 29 on socket 0 00:06:06.688 EAL: Detected lcore 102 as core 30 on socket 0 00:06:06.688 EAL: Detected lcore 103 as core 31 on socket 0 00:06:06.688 EAL: Detected lcore 104 as core 32 on socket 0 00:06:06.688 EAL: Detected lcore 105 as core 33 
on socket 0 00:06:06.688 EAL: Detected lcore 106 as core 34 on socket 0 00:06:06.688 EAL: Detected lcore 107 as core 35 on socket 0 00:06:06.688 EAL: Detected lcore 108 as core 0 on socket 1 00:06:06.688 EAL: Detected lcore 109 as core 1 on socket 1 00:06:06.688 EAL: Detected lcore 110 as core 2 on socket 1 00:06:06.688 EAL: Detected lcore 111 as core 3 on socket 1 00:06:06.688 EAL: Detected lcore 112 as core 4 on socket 1 00:06:06.688 EAL: Detected lcore 113 as core 5 on socket 1 00:06:06.688 EAL: Detected lcore 114 as core 6 on socket 1 00:06:06.688 EAL: Detected lcore 115 as core 7 on socket 1 00:06:06.688 EAL: Detected lcore 116 as core 8 on socket 1 00:06:06.688 EAL: Detected lcore 117 as core 9 on socket 1 00:06:06.688 EAL: Detected lcore 118 as core 10 on socket 1 00:06:06.688 EAL: Detected lcore 119 as core 11 on socket 1 00:06:06.688 EAL: Detected lcore 120 as core 12 on socket 1 00:06:06.688 EAL: Detected lcore 121 as core 13 on socket 1 00:06:06.688 EAL: Detected lcore 122 as core 14 on socket 1 00:06:06.688 EAL: Detected lcore 123 as core 15 on socket 1 00:06:06.688 EAL: Detected lcore 124 as core 16 on socket 1 00:06:06.688 EAL: Detected lcore 125 as core 17 on socket 1 00:06:06.688 EAL: Detected lcore 126 as core 18 on socket 1 00:06:06.688 EAL: Detected lcore 127 as core 19 on socket 1 00:06:06.688 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:06.688 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:06.688 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:06.688 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:06.688 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:06.688 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:06.688 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:06.688 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:06.688 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:06.688 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:06.688 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:06.688 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:06.688 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:06.688 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:06.688 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:06.688 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:06.688 EAL: Maximum logical cores by configuration: 128 00:06:06.688 EAL: Detected CPU lcores: 128 00:06:06.689 EAL: Detected NUMA nodes: 2 00:06:06.689 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:06.689 EAL: Detected shared linkage of DPDK 00:06:06.689 EAL: No shared files mode enabled, IPC will be disabled 00:06:06.689 EAL: Bus pci wants IOVA as 'DC' 00:06:06.689 EAL: Buses did not request a specific IOVA mode. 00:06:06.689 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:06.689 EAL: Selected IOVA mode 'VA' 00:06:06.689 EAL: Probing VFIO support... 00:06:06.689 EAL: IOMMU type 1 (Type 1) is supported 00:06:06.689 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:06.689 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:06.689 EAL: VFIO support initialized 00:06:06.689 EAL: Ask a virtual area of 0x2e000 bytes 00:06:06.689 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:06.689 EAL: Setting up physically contiguous memory... 
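Note on the memseg list sizes above: each "VA reserved for memseg list" line covers 0x400000000 bytes, which is exactly n_segs times the hugepage size, i.e. 8192 segments of 2 MiB = 16 GiB per list, with four lists per NUMA socket. A one-line check:

    printf '0x%x\n' $((8192 * 2 * 1024 * 1024))   # 8192 segs * 2 MiB = 0x400000000 (16 GiB)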
00:06:06.689 EAL: Setting maximum number of open files to 524288 00:06:06.689 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:06.689 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:06.689 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:06.689 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.689 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:06.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.689 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.689 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:06.689 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:06.689 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.689 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:06.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.689 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.689 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:06.689 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:06.689 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.689 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:06.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.689 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.689 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:06.689 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:06.689 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.689 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:06.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.689 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.689 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:06.689 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:06.689 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:06.689 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.689 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:06.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.689 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.689 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:06.689 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:06.689 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.689 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:06.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.689 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.689 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:06.689 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:06.689 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.689 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:06.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.689 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.689 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:06.689 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:06.689 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.689 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:06.689 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.689 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.689 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:06.689 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:06.689 EAL: Hugepages will be freed exactly as allocated. 00:06:06.689 EAL: No shared files mode enabled, IPC is disabled 00:06:06.689 EAL: No shared files mode enabled, IPC is disabled 00:06:06.689 EAL: TSC frequency is ~2400000 KHz 00:06:06.689 EAL: Main lcore 0 is ready (tid=7f5a93e1fa00;cpuset=[0]) 00:06:06.689 EAL: Trying to obtain current memory policy. 00:06:06.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.689 EAL: Restoring previous memory policy: 0 00:06:06.689 EAL: request: mp_malloc_sync 00:06:06.689 EAL: No shared files mode enabled, IPC is disabled 00:06:06.689 EAL: Heap on socket 0 was expanded by 2MB 00:06:06.689 EAL: No shared files mode enabled, IPC is disabled 00:06:06.689 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:06.689 EAL: Mem event callback 'spdk:(nil)' registered 00:06:06.689 00:06:06.689 00:06:06.689 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.689 http://cunit.sourceforge.net/ 00:06:06.689 00:06:06.689 00:06:06.689 Suite: components_suite 00:06:06.689 Test: vtophys_malloc_test ...passed 00:06:06.689 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:06.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.689 EAL: Restoring previous memory policy: 4 00:06:06.689 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.689 EAL: request: mp_malloc_sync 00:06:06.689 EAL: No shared files mode enabled, IPC is disabled 00:06:06.689 EAL: Heap on socket 0 was expanded by 4MB 00:06:06.704 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.704 EAL: request: mp_malloc_sync 00:06:06.704 EAL: No shared files mode enabled, IPC is disabled 00:06:06.704 EAL: Heap on socket 0 was shrunk by 4MB 00:06:06.704 EAL: Trying to obtain current memory policy. 00:06:06.704 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.704 EAL: Restoring previous memory policy: 4 00:06:06.704 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.704 EAL: request: mp_malloc_sync 00:06:06.704 EAL: No shared files mode enabled, IPC is disabled 00:06:06.704 EAL: Heap on socket 0 was expanded by 6MB 00:06:06.704 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.704 EAL: request: mp_malloc_sync 00:06:06.704 EAL: No shared files mode enabled, IPC is disabled 00:06:06.704 EAL: Heap on socket 0 was shrunk by 6MB 00:06:06.704 EAL: Trying to obtain current memory policy. 00:06:06.704 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.704 EAL: Restoring previous memory policy: 4 00:06:06.704 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.704 EAL: request: mp_malloc_sync 00:06:06.704 EAL: No shared files mode enabled, IPC is disabled 00:06:06.704 EAL: Heap on socket 0 was expanded by 10MB 00:06:06.704 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.704 EAL: request: mp_malloc_sync 00:06:06.704 EAL: No shared files mode enabled, IPC is disabled 00:06:06.704 EAL: Heap on socket 0 was shrunk by 10MB 00:06:06.704 EAL: Trying to obtain current memory policy. 
00:06:06.704 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.704 EAL: Restoring previous memory policy: 4 00:06:06.704 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.704 EAL: request: mp_malloc_sync 00:06:06.704 EAL: No shared files mode enabled, IPC is disabled 00:06:06.704 EAL: Heap on socket 0 was expanded by 18MB 00:06:06.704 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.704 EAL: request: mp_malloc_sync 00:06:06.704 EAL: No shared files mode enabled, IPC is disabled 00:06:06.704 EAL: Heap on socket 0 was shrunk by 18MB 00:06:06.704 EAL: Trying to obtain current memory policy. 00:06:06.704 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.704 EAL: Restoring previous memory policy: 4 00:06:06.704 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.704 EAL: request: mp_malloc_sync 00:06:06.704 EAL: No shared files mode enabled, IPC is disabled 00:06:06.704 EAL: Heap on socket 0 was expanded by 34MB 00:06:06.704 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.704 EAL: request: mp_malloc_sync 00:06:06.704 EAL: No shared files mode enabled, IPC is disabled 00:06:06.704 EAL: Heap on socket 0 was shrunk by 34MB 00:06:06.704 EAL: Trying to obtain current memory policy. 00:06:06.704 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.704 EAL: Restoring previous memory policy: 4 00:06:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.705 EAL: request: mp_malloc_sync 00:06:06.705 EAL: No shared files mode enabled, IPC is disabled 00:06:06.705 EAL: Heap on socket 0 was expanded by 66MB 00:06:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.705 EAL: request: mp_malloc_sync 00:06:06.705 EAL: No shared files mode enabled, IPC is disabled 00:06:06.705 EAL: Heap on socket 0 was shrunk by 66MB 00:06:06.705 EAL: Trying to obtain current memory policy. 00:06:06.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.705 EAL: Restoring previous memory policy: 4 00:06:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.705 EAL: request: mp_malloc_sync 00:06:06.705 EAL: No shared files mode enabled, IPC is disabled 00:06:06.705 EAL: Heap on socket 0 was expanded by 130MB 00:06:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.705 EAL: request: mp_malloc_sync 00:06:06.705 EAL: No shared files mode enabled, IPC is disabled 00:06:06.705 EAL: Heap on socket 0 was shrunk by 130MB 00:06:06.705 EAL: Trying to obtain current memory policy. 00:06:06.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.705 EAL: Restoring previous memory policy: 4 00:06:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.705 EAL: request: mp_malloc_sync 00:06:06.705 EAL: No shared files mode enabled, IPC is disabled 00:06:06.705 EAL: Heap on socket 0 was expanded by 258MB 00:06:06.964 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.964 EAL: request: mp_malloc_sync 00:06:06.964 EAL: No shared files mode enabled, IPC is disabled 00:06:06.964 EAL: Heap on socket 0 was shrunk by 258MB 00:06:06.964 EAL: Trying to obtain current memory policy. 
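Note on the allocation sizes in vtophys_spdk_malloc_test above: each expanded/shrunk pair is one allocation and free at a given size, and the sizes follow 2^k + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, then 514 and 1026 below). A sketch that reproduces the progression:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB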
00:06:06.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.964 EAL: Restoring previous memory policy: 4 00:06:06.964 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.964 EAL: request: mp_malloc_sync 00:06:06.964 EAL: No shared files mode enabled, IPC is disabled 00:06:06.964 EAL: Heap on socket 0 was expanded by 514MB 00:06:06.964 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.964 EAL: request: mp_malloc_sync 00:06:06.964 EAL: No shared files mode enabled, IPC is disabled 00:06:06.964 EAL: Heap on socket 0 was shrunk by 514MB 00:06:06.964 EAL: Trying to obtain current memory policy. 00:06:06.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.224 EAL: Restoring previous memory policy: 4 00:06:07.224 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.224 EAL: request: mp_malloc_sync 00:06:07.224 EAL: No shared files mode enabled, IPC is disabled 00:06:07.224 EAL: Heap on socket 0 was expanded by 1026MB 00:06:07.224 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.484 EAL: request: mp_malloc_sync 00:06:07.484 EAL: No shared files mode enabled, IPC is disabled 00:06:07.484 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:07.484 passed 00:06:07.484 00:06:07.484 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.484 suites 1 1 n/a 0 0 00:06:07.484 tests 2 2 2 0 0 00:06:07.484 asserts 497 497 497 0 n/a 00:06:07.484 00:06:07.484 Elapsed time = 0.690 seconds 00:06:07.484 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.484 EAL: request: mp_malloc_sync 00:06:07.484 EAL: No shared files mode enabled, IPC is disabled 00:06:07.484 EAL: Heap on socket 0 was shrunk by 2MB 00:06:07.484 EAL: No shared files mode enabled, IPC is disabled 00:06:07.484 EAL: No shared files mode enabled, IPC is disabled 00:06:07.484 EAL: No shared files mode enabled, IPC is disabled 00:06:07.484 00:06:07.484 real 0m0.850s 00:06:07.484 user 0m0.447s 00:06:07.484 sys 0m0.368s 00:06:07.484 17:21:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.484 17:21:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:07.484 ************************************ 00:06:07.484 END TEST env_vtophys 00:06:07.484 ************************************ 00:06:07.484 17:21:59 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:07.484 17:21:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.484 17:21:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.484 17:21:59 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.484 ************************************ 00:06:07.484 START TEST env_pci 00:06:07.484 ************************************ 00:06:07.484 17:21:59 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:07.484 00:06:07.484 00:06:07.484 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.484 http://cunit.sourceforge.net/ 00:06:07.484 00:06:07.484 00:06:07.484 Suite: pci 00:06:07.484 Test: pci_hook ...[2024-12-06 17:21:59.438385] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1471803 has claimed it 00:06:07.484 EAL: Cannot find device (10000:00:01.0) 00:06:07.484 EAL: Failed to attach device on primary process 00:06:07.484 passed 00:06:07.484 00:06:07.484 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:07.484 suites 1 1 n/a 0 0 00:06:07.484 tests 1 1 1 0 0 00:06:07.484 asserts 25 25 25 0 n/a 00:06:07.484 00:06:07.484 Elapsed time = 0.031 seconds 00:06:07.484 00:06:07.484 real 0m0.052s 00:06:07.484 user 0m0.017s 00:06:07.484 sys 0m0.035s 00:06:07.484 17:21:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.484 17:21:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:07.484 ************************************ 00:06:07.484 END TEST env_pci 00:06:07.484 ************************************ 00:06:07.484 17:21:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:07.484 17:21:59 env -- env/env.sh@15 -- # uname 00:06:07.484 17:21:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:07.484 17:21:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:07.484 17:21:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:07.484 17:21:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:07.484 17:21:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.484 17:21:59 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.745 ************************************ 00:06:07.745 START TEST env_dpdk_post_init 00:06:07.745 ************************************ 00:06:07.745 17:21:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:07.745 EAL: Detected CPU lcores: 128 00:06:07.745 EAL: Detected NUMA nodes: 2 00:06:07.745 EAL: Detected shared linkage of DPDK 00:06:07.745 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:07.745 EAL: Selected IOVA mode 'VA' 00:06:07.745 EAL: VFIO support initialized 00:06:07.745 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:07.745 EAL: Using IOMMU type 1 (Type 1) 00:06:08.007 EAL: Ignore mapping IO port bar(1) 00:06:08.007 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:08.007 EAL: Ignore mapping IO port bar(1) 00:06:08.267 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:08.268 EAL: Ignore mapping IO port bar(1) 00:06:08.528 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:08.528 EAL: Ignore mapping IO port bar(1) 00:06:08.788 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:08.788 EAL: Ignore mapping IO port bar(1) 00:06:08.788 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:09.048 EAL: Ignore mapping IO port bar(1) 00:06:09.048 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:09.309 EAL: Ignore mapping IO port bar(1) 00:06:09.309 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:09.569 EAL: Ignore mapping IO port bar(1) 00:06:09.569 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:09.830 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:09.830 EAL: Ignore mapping IO port bar(1) 00:06:10.091 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:10.091 EAL: Ignore mapping IO port bar(1) 00:06:10.352 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:10.352 EAL: Ignore mapping IO port bar(1) 00:06:10.352 
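The pci_hook failure reported above ("Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1471803 has claimed it") is the expected result: the test claims a fake device from one process and verifies that a second claim is rejected. The claim is implemented as an exclusive lock on a per-device file; a rough sketch of that pattern, with a placeholder path prefix and simplified error handling, not SPDK's exact implementation:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Per-device claim file, as suggested by the lock path in the log.
     * The real lock lives at /var/tmp/spdk_pci_lock_<domain:bus:dev.fn>;
     * the prefix below is a placeholder. */
    static int claim_device(const char *bdf)
    {
        char path[128];
        struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

        snprintf(path, sizeof(path), "/var/tmp/demo_pci_lock_%s", bdf);
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return -1;
        if (fcntl(fd, F_SETLK, &lk) != 0) {  /* another process holds the claim */
            close(fd);
            return -1;
        }
        return fd;  /* hold fd open for as long as the device stays claimed */
    }

    int main(void)
    {
        printf(claim_device("10000:00:01.0") < 0 ? "claim rejected\n"
                                                 : "claim acquired\n");
        return 0;
    }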
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:10.613 EAL: Ignore mapping IO port bar(1) 00:06:10.613 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:10.874 EAL: Ignore mapping IO port bar(1) 00:06:10.874 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:11.135 EAL: Ignore mapping IO port bar(1) 00:06:11.135 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:11.135 EAL: Ignore mapping IO port bar(1) 00:06:11.397 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:11.397 EAL: Ignore mapping IO port bar(1) 00:06:11.659 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:11.659 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:11.659 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:11.659 Starting DPDK initialization... 00:06:11.659 Starting SPDK post initialization... 00:06:11.659 SPDK NVMe probe 00:06:11.659 Attaching to 0000:65:00.0 00:06:11.659 Attached to 0000:65:00.0 00:06:11.659 Cleaning up... 00:06:13.577 00:06:13.577 real 0m5.745s 00:06:13.577 user 0m0.106s 00:06:13.577 sys 0m0.196s 00:06:13.577 17:22:05 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.577 17:22:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:13.577 ************************************ 00:06:13.577 END TEST env_dpdk_post_init 00:06:13.577 ************************************ 00:06:13.577 17:22:05 env -- env/env.sh@26 -- # uname 00:06:13.577 17:22:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:13.577 17:22:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:13.577 17:22:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.577 17:22:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.577 17:22:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.577 ************************************ 00:06:13.577 START TEST env_mem_callbacks 00:06:13.577 ************************************ 00:06:13.577 17:22:05 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:13.577 EAL: Detected CPU lcores: 128 00:06:13.577 EAL: Detected NUMA nodes: 2 00:06:13.577 EAL: Detected shared linkage of DPDK 00:06:13.577 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:13.577 EAL: Selected IOVA mode 'VA' 00:06:13.577 EAL: VFIO support initialized 00:06:13.577 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:13.577 00:06:13.577 00:06:13.577 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.577 http://cunit.sourceforge.net/ 00:06:13.577 00:06:13.577 00:06:13.577 Suite: memory 00:06:13.577 Test: test ... 
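The register/unregister lines that follow are SPDK's mem_callbacks unit test handing anonymous memory to SPDK's memory map, with the test's registered callback echoing every map update. A compressed sketch of the API pair being exercised, spdk_mem_register()/spdk_mem_unregister() from spdk/env.h, with illustrative sizes (SPDK requires a 2 MiB-aligned address and length, hence the over-allocation below):

    #include <stdint.h>
    #include <sys/mman.h>
    #include <spdk/env.h>

    int main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "mem_register_demo";
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* Map extra space so a 2 MiB-aligned region can be carved out. */
        size_t len = 2 * 1024 * 1024;
        void *raw = mmap(NULL, len * 2, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (raw == MAP_FAILED)
            return 1;
        void *buf = (void *)(((uintptr_t)raw + len - 1) & ~(uintptr_t)(len - 1));

        if (spdk_mem_register(buf, len) == 0) {
            /* buf is now visible to spdk_vtophys()/DMA translation;
             * whether translation succeeds depends on the IOVA mode. */
            spdk_mem_unregister(buf, len);
        }
        munmap(raw, len * 2);
        return 0;
    }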
00:06:13.577 register 0x200000200000 2097152 00:06:13.577 malloc 3145728 00:06:13.577 register 0x200000400000 4194304 00:06:13.577 buf 0x200000500000 len 3145728 PASSED 00:06:13.577 malloc 64 00:06:13.577 buf 0x2000004fff40 len 64 PASSED 00:06:13.577 malloc 4194304 00:06:13.577 register 0x200000800000 6291456 00:06:13.577 buf 0x200000a00000 len 4194304 PASSED 00:06:13.577 free 0x200000500000 3145728 00:06:13.577 free 0x2000004fff40 64 00:06:13.577 unregister 0x200000400000 4194304 PASSED 00:06:13.577 free 0x200000a00000 4194304 00:06:13.577 unregister 0x200000800000 6291456 PASSED 00:06:13.577 malloc 8388608 00:06:13.577 register 0x200000400000 10485760 00:06:13.577 buf 0x200000600000 len 8388608 PASSED 00:06:13.577 free 0x200000600000 8388608 00:06:13.577 unregister 0x200000400000 10485760 PASSED 00:06:13.577 passed 00:06:13.577 00:06:13.577 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.577 suites 1 1 n/a 0 0 00:06:13.577 tests 1 1 1 0 0 00:06:13.577 asserts 15 15 15 0 n/a 00:06:13.577 00:06:13.577 Elapsed time = 0.010 seconds 00:06:13.577 00:06:13.577 real 0m0.070s 00:06:13.577 user 0m0.030s 00:06:13.577 sys 0m0.040s 00:06:13.577 17:22:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.577 17:22:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:13.577 ************************************ 00:06:13.577 END TEST env_mem_callbacks 00:06:13.577 ************************************ 00:06:13.577 00:06:13.577 real 0m7.536s 00:06:13.577 user 0m1.053s 00:06:13.577 sys 0m1.037s 00:06:13.577 17:22:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.577 17:22:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.577 ************************************ 00:06:13.577 END TEST env 00:06:13.577 ************************************ 00:06:13.577 17:22:05 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:13.577 17:22:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.577 17:22:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.577 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:06:13.577 ************************************ 00:06:13.577 START TEST rpc 00:06:13.577 ************************************ 00:06:13.577 17:22:05 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:13.837 * Looking for test storage... 
00:06:13.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:13.837 17:22:05 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:13.837 17:22:05 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:13.837 17:22:05 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.837 17:22:05 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.837 17:22:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.837 17:22:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.837 17:22:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.837 17:22:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.837 17:22:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.837 17:22:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.837 17:22:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.837 17:22:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.838 17:22:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.838 17:22:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.838 17:22:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.838 17:22:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:13.838 17:22:05 rpc -- scripts/common.sh@345 -- # : 1 00:06:13.838 17:22:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.838 17:22:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.838 17:22:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:13.838 17:22:05 rpc -- scripts/common.sh@353 -- # local d=1 00:06:13.838 17:22:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.838 17:22:05 rpc -- scripts/common.sh@355 -- # echo 1 00:06:13.838 17:22:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.838 17:22:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:13.838 17:22:05 rpc -- scripts/common.sh@353 -- # local d=2 00:06:13.838 17:22:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.838 17:22:05 rpc -- scripts/common.sh@355 -- # echo 2 00:06:13.838 17:22:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.838 17:22:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.838 17:22:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.838 17:22:05 rpc -- scripts/common.sh@368 -- # return 0 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.838 --rc genhtml_branch_coverage=1 00:06:13.838 --rc genhtml_function_coverage=1 00:06:13.838 --rc genhtml_legend=1 00:06:13.838 --rc geninfo_all_blocks=1 00:06:13.838 --rc geninfo_unexecuted_blocks=1 00:06:13.838 00:06:13.838 ' 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.838 --rc genhtml_branch_coverage=1 00:06:13.838 --rc genhtml_function_coverage=1 00:06:13.838 --rc genhtml_legend=1 00:06:13.838 --rc geninfo_all_blocks=1 00:06:13.838 --rc geninfo_unexecuted_blocks=1 00:06:13.838 00:06:13.838 ' 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:13.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.838 --rc genhtml_branch_coverage=1 00:06:13.838 --rc genhtml_function_coverage=1 
00:06:13.838 --rc genhtml_legend=1 00:06:13.838 --rc geninfo_all_blocks=1 00:06:13.838 --rc geninfo_unexecuted_blocks=1 00:06:13.838 00:06:13.838 ' 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.838 --rc genhtml_branch_coverage=1 00:06:13.838 --rc genhtml_function_coverage=1 00:06:13.838 --rc genhtml_legend=1 00:06:13.838 --rc geninfo_all_blocks=1 00:06:13.838 --rc geninfo_unexecuted_blocks=1 00:06:13.838 00:06:13.838 ' 00:06:13.838 17:22:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1473110 00:06:13.838 17:22:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.838 17:22:05 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:13.838 17:22:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1473110 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 1473110 ']' 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.838 17:22:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.838 [2024-12-06 17:22:05.850614] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:06:13.838 [2024-12-06 17:22:05.850695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473110 ] 00:06:14.099 [2024-12-06 17:22:05.940797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.099 [2024-12-06 17:22:05.992729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:14.099 [2024-12-06 17:22:05.992787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1473110' to capture a snapshot of events at runtime. 00:06:14.099 [2024-12-06 17:22:05.992797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.099 [2024-12-06 17:22:05.992804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.099 [2024-12-06 17:22:05.992810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1473110 for offline analysis/debug. 
00:06:14.099 [2024-12-06 17:22:05.993558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.670 17:22:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.670 17:22:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.670 17:22:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:14.670 17:22:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:14.670 17:22:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:14.670 17:22:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:14.670 17:22:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.670 17:22:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.670 17:22:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.670 ************************************ 00:06:14.670 START TEST rpc_integrity 00:06:14.670 ************************************ 00:06:14.670 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:14.670 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:14.670 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.670 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.670 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.670 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:14.670 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:14.932 { 00:06:14.932 "name": "Malloc0", 00:06:14.932 "aliases": [ 00:06:14.932 "2cd13a6e-07c7-47ca-b118-b0104e91adec" 00:06:14.932 ], 00:06:14.932 "product_name": "Malloc disk", 00:06:14.932 "block_size": 512, 00:06:14.932 "num_blocks": 16384, 00:06:14.932 "uuid": "2cd13a6e-07c7-47ca-b118-b0104e91adec", 00:06:14.932 "assigned_rate_limits": { 00:06:14.932 "rw_ios_per_sec": 0, 00:06:14.932 "rw_mbytes_per_sec": 0, 00:06:14.932 "r_mbytes_per_sec": 0, 00:06:14.932 "w_mbytes_per_sec": 0 00:06:14.932 }, 
00:06:14.932 "claimed": false, 00:06:14.932 "zoned": false, 00:06:14.932 "supported_io_types": { 00:06:14.932 "read": true, 00:06:14.932 "write": true, 00:06:14.932 "unmap": true, 00:06:14.932 "flush": true, 00:06:14.932 "reset": true, 00:06:14.932 "nvme_admin": false, 00:06:14.932 "nvme_io": false, 00:06:14.932 "nvme_io_md": false, 00:06:14.932 "write_zeroes": true, 00:06:14.932 "zcopy": true, 00:06:14.932 "get_zone_info": false, 00:06:14.932 "zone_management": false, 00:06:14.932 "zone_append": false, 00:06:14.932 "compare": false, 00:06:14.932 "compare_and_write": false, 00:06:14.932 "abort": true, 00:06:14.932 "seek_hole": false, 00:06:14.932 "seek_data": false, 00:06:14.932 "copy": true, 00:06:14.932 "nvme_iov_md": false 00:06:14.932 }, 00:06:14.932 "memory_domains": [ 00:06:14.932 { 00:06:14.932 "dma_device_id": "system", 00:06:14.932 "dma_device_type": 1 00:06:14.932 }, 00:06:14.932 { 00:06:14.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:14.932 "dma_device_type": 2 00:06:14.932 } 00:06:14.932 ], 00:06:14.932 "driver_specific": {} 00:06:14.932 } 00:06:14.932 ]' 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.932 [2024-12-06 17:22:06.829863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:14.932 [2024-12-06 17:22:06.829911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:14.932 [2024-12-06 17:22:06.829926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12fbf80 00:06:14.932 [2024-12-06 17:22:06.829935] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:14.932 [2024-12-06 17:22:06.831532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:14.932 [2024-12-06 17:22:06.831569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:14.932 Passthru0 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.932 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:14.932 { 00:06:14.932 "name": "Malloc0", 00:06:14.932 "aliases": [ 00:06:14.932 "2cd13a6e-07c7-47ca-b118-b0104e91adec" 00:06:14.932 ], 00:06:14.932 "product_name": "Malloc disk", 00:06:14.932 "block_size": 512, 00:06:14.932 "num_blocks": 16384, 00:06:14.932 "uuid": "2cd13a6e-07c7-47ca-b118-b0104e91adec", 00:06:14.932 "assigned_rate_limits": { 00:06:14.932 "rw_ios_per_sec": 0, 00:06:14.932 "rw_mbytes_per_sec": 0, 00:06:14.932 "r_mbytes_per_sec": 0, 00:06:14.932 "w_mbytes_per_sec": 0 00:06:14.932 }, 00:06:14.932 "claimed": true, 00:06:14.932 "claim_type": "exclusive_write", 00:06:14.932 "zoned": false, 00:06:14.932 "supported_io_types": { 00:06:14.932 "read": true, 00:06:14.932 "write": true, 00:06:14.932 "unmap": true, 00:06:14.932 "flush": 
true, 00:06:14.932 "reset": true, 00:06:14.932 "nvme_admin": false, 00:06:14.932 "nvme_io": false, 00:06:14.932 "nvme_io_md": false, 00:06:14.932 "write_zeroes": true, 00:06:14.932 "zcopy": true, 00:06:14.932 "get_zone_info": false, 00:06:14.932 "zone_management": false, 00:06:14.932 "zone_append": false, 00:06:14.932 "compare": false, 00:06:14.932 "compare_and_write": false, 00:06:14.932 "abort": true, 00:06:14.932 "seek_hole": false, 00:06:14.932 "seek_data": false, 00:06:14.932 "copy": true, 00:06:14.932 "nvme_iov_md": false 00:06:14.932 }, 00:06:14.932 "memory_domains": [ 00:06:14.932 { 00:06:14.932 "dma_device_id": "system", 00:06:14.932 "dma_device_type": 1 00:06:14.932 }, 00:06:14.932 { 00:06:14.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:14.932 "dma_device_type": 2 00:06:14.932 } 00:06:14.932 ], 00:06:14.932 "driver_specific": {} 00:06:14.932 }, 00:06:14.932 { 00:06:14.932 "name": "Passthru0", 00:06:14.932 "aliases": [ 00:06:14.932 "0cc035b6-e76b-543d-a2c0-2e8f68553da1" 00:06:14.932 ], 00:06:14.932 "product_name": "passthru", 00:06:14.932 "block_size": 512, 00:06:14.932 "num_blocks": 16384, 00:06:14.932 "uuid": "0cc035b6-e76b-543d-a2c0-2e8f68553da1", 00:06:14.932 "assigned_rate_limits": { 00:06:14.932 "rw_ios_per_sec": 0, 00:06:14.932 "rw_mbytes_per_sec": 0, 00:06:14.932 "r_mbytes_per_sec": 0, 00:06:14.932 "w_mbytes_per_sec": 0 00:06:14.932 }, 00:06:14.932 "claimed": false, 00:06:14.932 "zoned": false, 00:06:14.932 "supported_io_types": { 00:06:14.932 "read": true, 00:06:14.932 "write": true, 00:06:14.932 "unmap": true, 00:06:14.932 "flush": true, 00:06:14.932 "reset": true, 00:06:14.932 "nvme_admin": false, 00:06:14.932 "nvme_io": false, 00:06:14.932 "nvme_io_md": false, 00:06:14.932 "write_zeroes": true, 00:06:14.932 "zcopy": true, 00:06:14.932 "get_zone_info": false, 00:06:14.932 "zone_management": false, 00:06:14.932 "zone_append": false, 00:06:14.932 "compare": false, 00:06:14.932 "compare_and_write": false, 00:06:14.932 "abort": true, 00:06:14.932 "seek_hole": false, 00:06:14.932 "seek_data": false, 00:06:14.932 "copy": true, 00:06:14.932 "nvme_iov_md": false 00:06:14.932 }, 00:06:14.932 "memory_domains": [ 00:06:14.932 { 00:06:14.932 "dma_device_id": "system", 00:06:14.932 "dma_device_type": 1 00:06:14.932 }, 00:06:14.932 { 00:06:14.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:14.932 "dma_device_type": 2 00:06:14.932 } 00:06:14.932 ], 00:06:14.932 "driver_specific": { 00:06:14.932 "passthru": { 00:06:14.932 "name": "Passthru0", 00:06:14.932 "base_bdev_name": "Malloc0" 00:06:14.932 } 00:06:14.932 } 00:06:14.932 } 00:06:14.932 ]' 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:14.932 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.933 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.933 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.933 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:14.933 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:14.933 17:22:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:14.933 00:06:14.933 real 0m0.298s 00:06:14.933 user 0m0.193s 00:06:14.933 sys 0m0.036s 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.933 17:22:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.933 ************************************ 00:06:14.933 END TEST rpc_integrity 00:06:14.933 ************************************ 00:06:15.194 17:22:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:15.194 17:22:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.194 17:22:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.194 17:22:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.194 ************************************ 00:06:15.194 START TEST rpc_plugins 00:06:15.194 ************************************ 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:15.194 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.194 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:15.194 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.194 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:15.194 { 00:06:15.194 "name": "Malloc1", 00:06:15.194 "aliases": [ 00:06:15.194 "1b089fec-52c8-4ac8-8134-f667cb87d9d9" 00:06:15.194 ], 00:06:15.194 "product_name": "Malloc disk", 00:06:15.194 "block_size": 4096, 00:06:15.194 "num_blocks": 256, 00:06:15.194 "uuid": "1b089fec-52c8-4ac8-8134-f667cb87d9d9", 00:06:15.194 "assigned_rate_limits": { 00:06:15.194 "rw_ios_per_sec": 0, 00:06:15.194 "rw_mbytes_per_sec": 0, 00:06:15.194 "r_mbytes_per_sec": 0, 00:06:15.194 "w_mbytes_per_sec": 0 00:06:15.194 }, 00:06:15.194 "claimed": false, 00:06:15.194 "zoned": false, 00:06:15.194 "supported_io_types": { 00:06:15.194 "read": true, 00:06:15.194 "write": true, 00:06:15.194 "unmap": true, 00:06:15.194 "flush": true, 00:06:15.194 "reset": true, 00:06:15.194 "nvme_admin": false, 00:06:15.194 "nvme_io": false, 00:06:15.194 "nvme_io_md": false, 00:06:15.194 "write_zeroes": true, 00:06:15.194 "zcopy": true, 00:06:15.194 "get_zone_info": false, 00:06:15.194 "zone_management": false, 00:06:15.194 "zone_append": false, 00:06:15.194 "compare": false, 00:06:15.194 "compare_and_write": false, 00:06:15.194 "abort": true, 00:06:15.194 "seek_hole": false, 00:06:15.194 "seek_data": false, 00:06:15.194 "copy": true, 00:06:15.194 "nvme_iov_md": false 
00:06:15.194 }, 00:06:15.194 "memory_domains": [ 00:06:15.194 { 00:06:15.194 "dma_device_id": "system", 00:06:15.194 "dma_device_type": 1 00:06:15.194 }, 00:06:15.194 { 00:06:15.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.194 "dma_device_type": 2 00:06:15.194 } 00:06:15.194 ], 00:06:15.194 "driver_specific": {} 00:06:15.194 } 00:06:15.194 ]' 00:06:15.194 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:15.194 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:15.194 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.194 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.194 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.195 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.195 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:15.195 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:15.195 17:22:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:15.195 00:06:15.195 real 0m0.156s 00:06:15.195 user 0m0.095s 00:06:15.195 sys 0m0.024s 00:06:15.195 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.195 17:22:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:15.195 ************************************ 00:06:15.195 END TEST rpc_plugins 00:06:15.195 ************************************ 00:06:15.455 17:22:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:15.455 17:22:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.455 17:22:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.455 17:22:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.455 ************************************ 00:06:15.455 START TEST rpc_trace_cmd_test 00:06:15.455 ************************************ 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:15.455 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1473110", 00:06:15.455 "tpoint_group_mask": "0x8", 00:06:15.455 "iscsi_conn": { 00:06:15.455 "mask": "0x2", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "scsi": { 00:06:15.455 "mask": "0x4", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "bdev": { 00:06:15.455 "mask": "0x8", 00:06:15.455 "tpoint_mask": "0xffffffffffffffff" 00:06:15.455 }, 00:06:15.455 "nvmf_rdma": { 00:06:15.455 "mask": "0x10", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "nvmf_tcp": { 00:06:15.455 "mask": "0x20", 00:06:15.455 
"tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "ftl": { 00:06:15.455 "mask": "0x40", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "blobfs": { 00:06:15.455 "mask": "0x80", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "dsa": { 00:06:15.455 "mask": "0x200", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "thread": { 00:06:15.455 "mask": "0x400", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "nvme_pcie": { 00:06:15.455 "mask": "0x800", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "iaa": { 00:06:15.455 "mask": "0x1000", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "nvme_tcp": { 00:06:15.455 "mask": "0x2000", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "bdev_nvme": { 00:06:15.455 "mask": "0x4000", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "sock": { 00:06:15.455 "mask": "0x8000", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "blob": { 00:06:15.455 "mask": "0x10000", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "bdev_raid": { 00:06:15.455 "mask": "0x20000", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 }, 00:06:15.455 "scheduler": { 00:06:15.455 "mask": "0x40000", 00:06:15.455 "tpoint_mask": "0x0" 00:06:15.455 } 00:06:15.455 }' 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:15.455 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:15.716 17:22:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:15.716 00:06:15.716 real 0m0.253s 00:06:15.716 user 0m0.209s 00:06:15.716 sys 0m0.037s 00:06:15.716 17:22:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.716 17:22:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 ************************************ 00:06:15.716 END TEST rpc_trace_cmd_test 00:06:15.716 ************************************ 00:06:15.716 17:22:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:15.716 17:22:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:15.716 17:22:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:15.716 17:22:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.716 17:22:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.716 17:22:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 ************************************ 00:06:15.716 START TEST rpc_daemon_integrity 00:06:15.716 ************************************ 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.716 17:22:07 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:15.716 { 00:06:15.716 "name": "Malloc2", 00:06:15.716 "aliases": [ 00:06:15.716 "dd4caeb2-30d8-4f41-8ed1-3c0b56c46b68" 00:06:15.716 ], 00:06:15.716 "product_name": "Malloc disk", 00:06:15.716 "block_size": 512, 00:06:15.716 "num_blocks": 16384, 00:06:15.716 "uuid": "dd4caeb2-30d8-4f41-8ed1-3c0b56c46b68", 00:06:15.716 "assigned_rate_limits": { 00:06:15.716 "rw_ios_per_sec": 0, 00:06:15.716 "rw_mbytes_per_sec": 0, 00:06:15.716 "r_mbytes_per_sec": 0, 00:06:15.716 "w_mbytes_per_sec": 0 00:06:15.716 }, 00:06:15.716 "claimed": false, 00:06:15.716 "zoned": false, 00:06:15.716 "supported_io_types": { 00:06:15.716 "read": true, 00:06:15.716 "write": true, 00:06:15.716 "unmap": true, 00:06:15.716 "flush": true, 00:06:15.716 "reset": true, 00:06:15.716 "nvme_admin": false, 00:06:15.716 "nvme_io": false, 00:06:15.716 "nvme_io_md": false, 00:06:15.716 "write_zeroes": true, 00:06:15.716 "zcopy": true, 00:06:15.716 "get_zone_info": false, 00:06:15.716 "zone_management": false, 00:06:15.716 "zone_append": false, 00:06:15.716 "compare": false, 00:06:15.716 "compare_and_write": false, 00:06:15.716 "abort": true, 00:06:15.716 "seek_hole": false, 00:06:15.716 "seek_data": false, 00:06:15.716 "copy": true, 00:06:15.716 "nvme_iov_md": false 00:06:15.716 }, 00:06:15.716 "memory_domains": [ 00:06:15.716 { 00:06:15.716 "dma_device_id": "system", 00:06:15.716 "dma_device_type": 1 00:06:15.716 }, 00:06:15.716 { 00:06:15.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.716 "dma_device_type": 2 00:06:15.716 } 00:06:15.716 ], 00:06:15.716 "driver_specific": {} 00:06:15.716 } 00:06:15.716 ]' 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.716 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.716 [2024-12-06 17:22:07.780414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:15.716 
[2024-12-06 17:22:07.780458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.716 [2024-12-06 17:22:07.780473] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x142d2f0 00:06:15.716 [2024-12-06 17:22:07.780481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.977 [2024-12-06 17:22:07.782002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.977 [2024-12-06 17:22:07.782040] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:15.977 Passthru0 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:15.977 { 00:06:15.977 "name": "Malloc2", 00:06:15.977 "aliases": [ 00:06:15.977 "dd4caeb2-30d8-4f41-8ed1-3c0b56c46b68" 00:06:15.977 ], 00:06:15.977 "product_name": "Malloc disk", 00:06:15.977 "block_size": 512, 00:06:15.977 "num_blocks": 16384, 00:06:15.977 "uuid": "dd4caeb2-30d8-4f41-8ed1-3c0b56c46b68", 00:06:15.977 "assigned_rate_limits": { 00:06:15.977 "rw_ios_per_sec": 0, 00:06:15.977 "rw_mbytes_per_sec": 0, 00:06:15.977 "r_mbytes_per_sec": 0, 00:06:15.977 "w_mbytes_per_sec": 0 00:06:15.977 }, 00:06:15.977 "claimed": true, 00:06:15.977 "claim_type": "exclusive_write", 00:06:15.977 "zoned": false, 00:06:15.977 "supported_io_types": { 00:06:15.977 "read": true, 00:06:15.977 "write": true, 00:06:15.977 "unmap": true, 00:06:15.977 "flush": true, 00:06:15.977 "reset": true, 00:06:15.977 "nvme_admin": false, 00:06:15.977 "nvme_io": false, 00:06:15.977 "nvme_io_md": false, 00:06:15.977 "write_zeroes": true, 00:06:15.977 "zcopy": true, 00:06:15.977 "get_zone_info": false, 00:06:15.977 "zone_management": false, 00:06:15.977 "zone_append": false, 00:06:15.977 "compare": false, 00:06:15.977 "compare_and_write": false, 00:06:15.977 "abort": true, 00:06:15.977 "seek_hole": false, 00:06:15.977 "seek_data": false, 00:06:15.977 "copy": true, 00:06:15.977 "nvme_iov_md": false 00:06:15.977 }, 00:06:15.977 "memory_domains": [ 00:06:15.977 { 00:06:15.977 "dma_device_id": "system", 00:06:15.977 "dma_device_type": 1 00:06:15.977 }, 00:06:15.977 { 00:06:15.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.977 "dma_device_type": 2 00:06:15.977 } 00:06:15.977 ], 00:06:15.977 "driver_specific": {} 00:06:15.977 }, 00:06:15.977 { 00:06:15.977 "name": "Passthru0", 00:06:15.977 "aliases": [ 00:06:15.977 "f48d859f-09d3-50be-8e20-05ae0c1ebb46" 00:06:15.977 ], 00:06:15.977 "product_name": "passthru", 00:06:15.977 "block_size": 512, 00:06:15.977 "num_blocks": 16384, 00:06:15.977 "uuid": "f48d859f-09d3-50be-8e20-05ae0c1ebb46", 00:06:15.977 "assigned_rate_limits": { 00:06:15.977 "rw_ios_per_sec": 0, 00:06:15.977 "rw_mbytes_per_sec": 0, 00:06:15.977 "r_mbytes_per_sec": 0, 00:06:15.977 "w_mbytes_per_sec": 0 00:06:15.977 }, 00:06:15.977 "claimed": false, 00:06:15.977 "zoned": false, 00:06:15.977 "supported_io_types": { 00:06:15.977 "read": true, 00:06:15.977 "write": true, 00:06:15.977 "unmap": true, 00:06:15.977 "flush": true, 00:06:15.977 "reset": true, 
00:06:15.977 "nvme_admin": false, 00:06:15.977 "nvme_io": false, 00:06:15.977 "nvme_io_md": false, 00:06:15.977 "write_zeroes": true, 00:06:15.977 "zcopy": true, 00:06:15.977 "get_zone_info": false, 00:06:15.977 "zone_management": false, 00:06:15.977 "zone_append": false, 00:06:15.977 "compare": false, 00:06:15.977 "compare_and_write": false, 00:06:15.977 "abort": true, 00:06:15.977 "seek_hole": false, 00:06:15.977 "seek_data": false, 00:06:15.977 "copy": true, 00:06:15.977 "nvme_iov_md": false 00:06:15.977 }, 00:06:15.977 "memory_domains": [ 00:06:15.977 { 00:06:15.977 "dma_device_id": "system", 00:06:15.977 "dma_device_type": 1 00:06:15.977 }, 00:06:15.977 { 00:06:15.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.977 "dma_device_type": 2 00:06:15.977 } 00:06:15.977 ], 00:06:15.977 "driver_specific": { 00:06:15.977 "passthru": { 00:06:15.977 "name": "Passthru0", 00:06:15.977 "base_bdev_name": "Malloc2" 00:06:15.977 } 00:06:15.977 } 00:06:15.977 } 00:06:15.977 ]' 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:15.977 00:06:15.977 real 0m0.300s 00:06:15.977 user 0m0.180s 00:06:15.977 sys 0m0.049s 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.977 17:22:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.977 ************************************ 00:06:15.977 END TEST rpc_daemon_integrity 00:06:15.977 ************************************ 00:06:15.977 17:22:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:15.977 17:22:07 rpc -- rpc/rpc.sh@84 -- # killprocess 1473110 00:06:15.977 17:22:07 rpc -- common/autotest_common.sh@954 -- # '[' -z 1473110 ']' 00:06:15.977 17:22:07 rpc -- common/autotest_common.sh@958 -- # kill -0 1473110 00:06:15.977 17:22:07 rpc -- common/autotest_common.sh@959 -- # uname 00:06:15.977 17:22:07 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.977 17:22:07 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1473110 
00:06:16.237 17:22:08 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.237 17:22:08 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.237 17:22:08 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1473110' 00:06:16.237 killing process with pid 1473110 00:06:16.237 17:22:08 rpc -- common/autotest_common.sh@973 -- # kill 1473110 00:06:16.237 17:22:08 rpc -- common/autotest_common.sh@978 -- # wait 1473110 00:06:16.237 00:06:16.237 real 0m2.700s 00:06:16.237 user 0m3.445s 00:06:16.237 sys 0m0.822s 00:06:16.237 17:22:08 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.237 17:22:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.237 ************************************ 00:06:16.237 END TEST rpc 00:06:16.237 ************************************ 00:06:16.497 17:22:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:16.497 17:22:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.497 17:22:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.497 17:22:08 -- common/autotest_common.sh@10 -- # set +x 00:06:16.497 ************************************ 00:06:16.497 START TEST skip_rpc 00:06:16.497 ************************************ 00:06:16.497 17:22:08 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:16.497 * Looking for test storage... 00:06:16.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:16.497 17:22:08 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.497 17:22:08 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.497 17:22:08 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.497 17:22:08 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:16.497 17:22:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:16.498 17:22:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.498 17:22:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.498 17:22:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:16.498 17:22:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:16.498 17:22:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.498 17:22:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:16.498 17:22:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.498 17:22:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:16.758 17:22:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:16.758 17:22:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.758 17:22:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:16.758 17:22:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.758 17:22:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.758 17:22:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.758 17:22:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:16.758 17:22:08 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.758 17:22:08 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.758 --rc genhtml_branch_coverage=1 00:06:16.758 --rc genhtml_function_coverage=1 00:06:16.758 --rc genhtml_legend=1 00:06:16.758 --rc geninfo_all_blocks=1 00:06:16.758 --rc geninfo_unexecuted_blocks=1 00:06:16.758 00:06:16.758 ' 00:06:16.758 17:22:08 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.759 --rc genhtml_branch_coverage=1 00:06:16.759 --rc genhtml_function_coverage=1 00:06:16.759 --rc genhtml_legend=1 00:06:16.759 --rc geninfo_all_blocks=1 00:06:16.759 --rc geninfo_unexecuted_blocks=1 00:06:16.759 00:06:16.759 ' 00:06:16.759 17:22:08 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.759 --rc genhtml_branch_coverage=1 00:06:16.759 --rc genhtml_function_coverage=1 00:06:16.759 --rc genhtml_legend=1 00:06:16.759 --rc geninfo_all_blocks=1 00:06:16.759 --rc geninfo_unexecuted_blocks=1 00:06:16.759 00:06:16.759 ' 00:06:16.759 17:22:08 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.759 --rc genhtml_branch_coverage=1 00:06:16.759 --rc genhtml_function_coverage=1 00:06:16.759 --rc genhtml_legend=1 00:06:16.759 --rc geninfo_all_blocks=1 00:06:16.759 --rc geninfo_unexecuted_blocks=1 00:06:16.759 00:06:16.759 ' 00:06:16.759 17:22:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.759 17:22:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:16.759 17:22:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:16.759 17:22:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.759 17:22:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.759 17:22:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.759 ************************************ 00:06:16.759 START TEST skip_rpc 00:06:16.759 ************************************ 00:06:16.759 17:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:16.759 
17:22:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1473957 00:06:16.759 17:22:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.759 17:22:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:16.759 17:22:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:16.759 [2024-12-06 17:22:08.669609] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:06:16.759 [2024-12-06 17:22:08.669674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473957 ] 00:06:16.759 [2024-12-06 17:22:08.763076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.759 [2024-12-06 17:22:08.817190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1473957 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1473957 ']' 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1473957 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1473957 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1473957' 00:06:22.052 killing process with pid 1473957 00:06:22.052 17:22:13 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1473957 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1473957 00:06:22.052 00:06:22.052 real 0m5.266s 00:06:22.052 user 0m5.013s 00:06:22.052 sys 0m0.301s 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.052 17:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.052 ************************************ 00:06:22.052 END TEST skip_rpc 00:06:22.052 ************************************ 00:06:22.052 17:22:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:22.052 17:22:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.052 17:22:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.052 17:22:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.052 ************************************ 00:06:22.052 START TEST skip_rpc_with_json 00:06:22.052 ************************************ 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1474996 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1474996 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1474996 ']' 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.052 17:22:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.052 [2024-12-06 17:22:14.005659] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:06:22.052 [2024-12-06 17:22:14.005706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474996 ] 00:06:22.052 [2024-12-06 17:22:14.087312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.311 [2024-12-06 17:22:14.118199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.879 [2024-12-06 17:22:14.802613] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:22.879 request: 00:06:22.879 { 00:06:22.879 "trtype": "tcp", 00:06:22.879 "method": "nvmf_get_transports", 00:06:22.879 "req_id": 1 00:06:22.879 } 00:06:22.879 Got JSON-RPC error response 00:06:22.879 response: 00:06:22.879 { 00:06:22.879 "code": -19, 00:06:22.879 "message": "No such device" 00:06:22.879 } 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.879 [2024-12-06 17:22:14.814719] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.879 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:23.165 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.165 17:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:23.165 { 00:06:23.165 "subsystems": [ 00:06:23.165 { 00:06:23.165 "subsystem": "fsdev", 00:06:23.165 "config": [ 00:06:23.165 { 00:06:23.165 "method": "fsdev_set_opts", 00:06:23.165 "params": { 00:06:23.165 "fsdev_io_pool_size": 65535, 00:06:23.165 "fsdev_io_cache_size": 256 00:06:23.165 } 00:06:23.165 } 00:06:23.165 ] 00:06:23.165 }, 00:06:23.165 { 00:06:23.165 "subsystem": "vfio_user_target", 00:06:23.165 "config": null 00:06:23.165 }, 00:06:23.165 { 00:06:23.165 "subsystem": "keyring", 00:06:23.165 "config": [] 00:06:23.165 }, 00:06:23.165 { 00:06:23.165 "subsystem": "iobuf", 00:06:23.165 "config": [ 00:06:23.165 { 00:06:23.166 "method": "iobuf_set_options", 00:06:23.166 "params": { 00:06:23.166 "small_pool_count": 8192, 00:06:23.166 "large_pool_count": 1024, 00:06:23.166 "small_bufsize": 8192, 00:06:23.166 "large_bufsize": 135168, 00:06:23.166 "enable_numa": false 00:06:23.166 } 00:06:23.166 } 
00:06:23.166 ] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "sock", 00:06:23.166 "config": [ 00:06:23.166 { 00:06:23.166 "method": "sock_set_default_impl", 00:06:23.166 "params": { 00:06:23.166 "impl_name": "posix" 00:06:23.166 } 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "method": "sock_impl_set_options", 00:06:23.166 "params": { 00:06:23.166 "impl_name": "ssl", 00:06:23.166 "recv_buf_size": 4096, 00:06:23.166 "send_buf_size": 4096, 00:06:23.166 "enable_recv_pipe": true, 00:06:23.166 "enable_quickack": false, 00:06:23.166 "enable_placement_id": 0, 00:06:23.166 "enable_zerocopy_send_server": true, 00:06:23.166 "enable_zerocopy_send_client": false, 00:06:23.166 "zerocopy_threshold": 0, 00:06:23.166 "tls_version": 0, 00:06:23.166 "enable_ktls": false 00:06:23.166 } 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "method": "sock_impl_set_options", 00:06:23.166 "params": { 00:06:23.166 "impl_name": "posix", 00:06:23.166 "recv_buf_size": 2097152, 00:06:23.166 "send_buf_size": 2097152, 00:06:23.166 "enable_recv_pipe": true, 00:06:23.166 "enable_quickack": false, 00:06:23.166 "enable_placement_id": 0, 00:06:23.166 "enable_zerocopy_send_server": true, 00:06:23.166 "enable_zerocopy_send_client": false, 00:06:23.166 "zerocopy_threshold": 0, 00:06:23.166 "tls_version": 0, 00:06:23.166 "enable_ktls": false 00:06:23.166 } 00:06:23.166 } 00:06:23.166 ] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "vmd", 00:06:23.166 "config": [] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "accel", 00:06:23.166 "config": [ 00:06:23.166 { 00:06:23.166 "method": "accel_set_options", 00:06:23.166 "params": { 00:06:23.166 "small_cache_size": 128, 00:06:23.166 "large_cache_size": 16, 00:06:23.166 "task_count": 2048, 00:06:23.166 "sequence_count": 2048, 00:06:23.166 "buf_count": 2048 00:06:23.166 } 00:06:23.166 } 00:06:23.166 ] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "bdev", 00:06:23.166 "config": [ 00:06:23.166 { 00:06:23.166 "method": "bdev_set_options", 00:06:23.166 "params": { 00:06:23.166 "bdev_io_pool_size": 65535, 00:06:23.166 "bdev_io_cache_size": 256, 00:06:23.166 "bdev_auto_examine": true, 00:06:23.166 "iobuf_small_cache_size": 128, 00:06:23.166 "iobuf_large_cache_size": 16 00:06:23.166 } 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "method": "bdev_raid_set_options", 00:06:23.166 "params": { 00:06:23.166 "process_window_size_kb": 1024, 00:06:23.166 "process_max_bandwidth_mb_sec": 0 00:06:23.166 } 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "method": "bdev_iscsi_set_options", 00:06:23.166 "params": { 00:06:23.166 "timeout_sec": 30 00:06:23.166 } 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "method": "bdev_nvme_set_options", 00:06:23.166 "params": { 00:06:23.166 "action_on_timeout": "none", 00:06:23.166 "timeout_us": 0, 00:06:23.166 "timeout_admin_us": 0, 00:06:23.166 "keep_alive_timeout_ms": 10000, 00:06:23.166 "arbitration_burst": 0, 00:06:23.166 "low_priority_weight": 0, 00:06:23.166 "medium_priority_weight": 0, 00:06:23.166 "high_priority_weight": 0, 00:06:23.166 "nvme_adminq_poll_period_us": 10000, 00:06:23.166 "nvme_ioq_poll_period_us": 0, 00:06:23.166 "io_queue_requests": 0, 00:06:23.166 "delay_cmd_submit": true, 00:06:23.166 "transport_retry_count": 4, 00:06:23.166 "bdev_retry_count": 3, 00:06:23.166 "transport_ack_timeout": 0, 00:06:23.166 "ctrlr_loss_timeout_sec": 0, 00:06:23.166 "reconnect_delay_sec": 0, 00:06:23.166 "fast_io_fail_timeout_sec": 0, 00:06:23.166 "disable_auto_failback": false, 00:06:23.166 "generate_uuids": false, 00:06:23.166 "transport_tos": 
0, 00:06:23.166 "nvme_error_stat": false, 00:06:23.166 "rdma_srq_size": 0, 00:06:23.166 "io_path_stat": false, 00:06:23.166 "allow_accel_sequence": false, 00:06:23.166 "rdma_max_cq_size": 0, 00:06:23.166 "rdma_cm_event_timeout_ms": 0, 00:06:23.166 "dhchap_digests": [ 00:06:23.166 "sha256", 00:06:23.166 "sha384", 00:06:23.166 "sha512" 00:06:23.166 ], 00:06:23.166 "dhchap_dhgroups": [ 00:06:23.166 "null", 00:06:23.166 "ffdhe2048", 00:06:23.166 "ffdhe3072", 00:06:23.166 "ffdhe4096", 00:06:23.166 "ffdhe6144", 00:06:23.166 "ffdhe8192" 00:06:23.166 ] 00:06:23.166 } 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "method": "bdev_nvme_set_hotplug", 00:06:23.166 "params": { 00:06:23.166 "period_us": 100000, 00:06:23.166 "enable": false 00:06:23.166 } 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "method": "bdev_wait_for_examine" 00:06:23.166 } 00:06:23.166 ] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "scsi", 00:06:23.166 "config": null 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "scheduler", 00:06:23.166 "config": [ 00:06:23.166 { 00:06:23.166 "method": "framework_set_scheduler", 00:06:23.166 "params": { 00:06:23.166 "name": "static" 00:06:23.166 } 00:06:23.166 } 00:06:23.166 ] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "vhost_scsi", 00:06:23.166 "config": [] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "vhost_blk", 00:06:23.166 "config": [] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "ublk", 00:06:23.166 "config": [] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "nbd", 00:06:23.166 "config": [] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "nvmf", 00:06:23.166 "config": [ 00:06:23.166 { 00:06:23.166 "method": "nvmf_set_config", 00:06:23.166 "params": { 00:06:23.166 "discovery_filter": "match_any", 00:06:23.166 "admin_cmd_passthru": { 00:06:23.166 "identify_ctrlr": false 00:06:23.166 }, 00:06:23.166 "dhchap_digests": [ 00:06:23.166 "sha256", 00:06:23.166 "sha384", 00:06:23.166 "sha512" 00:06:23.166 ], 00:06:23.166 "dhchap_dhgroups": [ 00:06:23.166 "null", 00:06:23.166 "ffdhe2048", 00:06:23.166 "ffdhe3072", 00:06:23.166 "ffdhe4096", 00:06:23.166 "ffdhe6144", 00:06:23.166 "ffdhe8192" 00:06:23.166 ] 00:06:23.166 } 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "method": "nvmf_set_max_subsystems", 00:06:23.166 "params": { 00:06:23.166 "max_subsystems": 1024 00:06:23.166 } 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "method": "nvmf_set_crdt", 00:06:23.166 "params": { 00:06:23.166 "crdt1": 0, 00:06:23.166 "crdt2": 0, 00:06:23.166 "crdt3": 0 00:06:23.166 } 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "method": "nvmf_create_transport", 00:06:23.166 "params": { 00:06:23.166 "trtype": "TCP", 00:06:23.166 "max_queue_depth": 128, 00:06:23.166 "max_io_qpairs_per_ctrlr": 127, 00:06:23.166 "in_capsule_data_size": 4096, 00:06:23.166 "max_io_size": 131072, 00:06:23.166 "io_unit_size": 131072, 00:06:23.166 "max_aq_depth": 128, 00:06:23.166 "num_shared_buffers": 511, 00:06:23.166 "buf_cache_size": 4294967295, 00:06:23.166 "dif_insert_or_strip": false, 00:06:23.166 "zcopy": false, 00:06:23.166 "c2h_success": true, 00:06:23.166 "sock_priority": 0, 00:06:23.166 "abort_timeout_sec": 1, 00:06:23.166 "ack_timeout": 0, 00:06:23.166 "data_wr_pool_size": 0 00:06:23.166 } 00:06:23.166 } 00:06:23.166 ] 00:06:23.166 }, 00:06:23.166 { 00:06:23.166 "subsystem": "iscsi", 00:06:23.166 "config": [ 00:06:23.166 { 00:06:23.166 "method": "iscsi_set_options", 00:06:23.166 "params": { 00:06:23.166 "node_base": "iqn.2016-06.io.spdk", 00:06:23.166 "max_sessions": 
128, 00:06:23.166 "max_connections_per_session": 2, 00:06:23.166 "max_queue_depth": 64, 00:06:23.166 "default_time2wait": 2, 00:06:23.166 "default_time2retain": 20, 00:06:23.166 "first_burst_length": 8192, 00:06:23.166 "immediate_data": true, 00:06:23.166 "allow_duplicated_isid": false, 00:06:23.166 "error_recovery_level": 0, 00:06:23.166 "nop_timeout": 60, 00:06:23.166 "nop_in_interval": 30, 00:06:23.166 "disable_chap": false, 00:06:23.166 "require_chap": false, 00:06:23.166 "mutual_chap": false, 00:06:23.166 "chap_group": 0, 00:06:23.167 "max_large_datain_per_connection": 64, 00:06:23.167 "max_r2t_per_connection": 4, 00:06:23.167 "pdu_pool_size": 36864, 00:06:23.167 "immediate_data_pool_size": 16384, 00:06:23.167 "data_out_pool_size": 2048 00:06:23.167 } 00:06:23.167 } 00:06:23.167 ] 00:06:23.167 } 00:06:23.167 ] 00:06:23.167 } 00:06:23.167 17:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:23.167 17:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1474996 00:06:23.167 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1474996 ']' 00:06:23.167 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1474996 00:06:23.167 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:23.167 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.167 17:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1474996 00:06:23.167 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.167 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.167 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1474996' 00:06:23.167 killing process with pid 1474996 00:06:23.167 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1474996 00:06:23.167 17:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1474996 00:06:23.426 17:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1475336 00:06:23.426 17:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:23.426 17:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:28.716 17:22:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1475336 00:06:28.716 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1475336 ']' 00:06:28.716 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1475336 00:06:28.716 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:28.716 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.716 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1475336 00:06:28.716 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.716 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1475336' 00:06:28.717 killing process with pid 1475336 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1475336 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1475336 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:28.717 00:06:28.717 real 0m6.558s 00:06:28.717 user 0m6.487s 00:06:28.717 sys 0m0.550s 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:28.717 ************************************ 00:06:28.717 END TEST skip_rpc_with_json 00:06:28.717 ************************************ 00:06:28.717 17:22:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:28.717 17:22:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.717 17:22:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.717 17:22:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.717 ************************************ 00:06:28.717 START TEST skip_rpc_with_delay 00:06:28.717 ************************************ 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:28.717 
[2024-12-06 17:22:20.650574] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.717 00:06:28.717 real 0m0.079s 00:06:28.717 user 0m0.052s 00:06:28.717 sys 0m0.026s 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.717 17:22:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:28.717 ************************************ 00:06:28.717 END TEST skip_rpc_with_delay 00:06:28.717 ************************************ 00:06:28.717 17:22:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:28.717 17:22:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:28.717 17:22:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:28.717 17:22:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.717 17:22:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.717 17:22:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.717 ************************************ 00:06:28.717 START TEST exit_on_failed_rpc_init 00:06:28.717 ************************************ 00:06:28.717 17:22:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:28.717 17:22:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1476410 00:06:28.717 17:22:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1476410 00:06:28.717 17:22:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.717 17:22:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1476410 ']' 00:06:28.717 17:22:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.717 17:22:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.717 17:22:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.717 17:22:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.717 17:22:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:28.978 [2024-12-06 17:22:20.814622] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:06:28.978 [2024-12-06 17:22:20.814702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476410 ] 00:06:28.978 [2024-12-06 17:22:20.902254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.978 [2024-12-06 17:22:20.937754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:29.549 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.809 [2024-12-06 17:22:21.676441] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:06:29.809 [2024-12-06 17:22:21.676492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476575 ] 00:06:29.809 [2024-12-06 17:22:21.763412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.809 [2024-12-06 17:22:21.799125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.809 [2024-12-06 17:22:21.799176] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:29.809 [2024-12-06 17:22:21.799186] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:29.809 [2024-12-06 17:22:21.799193] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1476410 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1476410 ']' 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1476410 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.809 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1476410 00:06:30.069 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.069 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.069 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1476410' 00:06:30.070 killing process with pid 1476410 00:06:30.070 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1476410 00:06:30.070 17:22:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1476410 00:06:30.070 00:06:30.070 real 0m1.334s 00:06:30.070 user 0m1.564s 00:06:30.070 sys 0m0.392s 00:06:30.070 17:22:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.070 17:22:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:30.070 ************************************ 00:06:30.070 END TEST exit_on_failed_rpc_init 00:06:30.070 ************************************ 00:06:30.070 17:22:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:30.070 00:06:30.070 real 0m13.763s 00:06:30.070 user 0m13.349s 00:06:30.070 sys 0m1.591s 00:06:30.070 17:22:22 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.070 17:22:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.070 ************************************ 00:06:30.070 END TEST skip_rpc 00:06:30.070 ************************************ 00:06:30.331 17:22:22 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:30.331 17:22:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.331 17:22:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.331 17:22:22 -- 
common/autotest_common.sh@10 -- # set +x 00:06:30.331 ************************************ 00:06:30.331 START TEST rpc_client 00:06:30.331 ************************************ 00:06:30.331 17:22:22 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:30.331 * Looking for test storage... 00:06:30.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:30.331 17:22:22 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.331 17:22:22 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.331 17:22:22 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.331 17:22:22 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.331 17:22:22 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:30.331 17:22:22 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.331 17:22:22 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.331 --rc genhtml_branch_coverage=1 00:06:30.331 --rc genhtml_function_coverage=1 00:06:30.331 --rc genhtml_legend=1 00:06:30.331 --rc geninfo_all_blocks=1 00:06:30.331 --rc geninfo_unexecuted_blocks=1 00:06:30.331 00:06:30.331 ' 00:06:30.331 17:22:22 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.331 --rc genhtml_branch_coverage=1 00:06:30.331 --rc genhtml_function_coverage=1 00:06:30.331 --rc genhtml_legend=1 00:06:30.331 --rc geninfo_all_blocks=1 00:06:30.331 --rc geninfo_unexecuted_blocks=1 00:06:30.331 00:06:30.331 ' 00:06:30.331 17:22:22 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.331 --rc genhtml_branch_coverage=1 00:06:30.331 --rc genhtml_function_coverage=1 00:06:30.331 --rc genhtml_legend=1 00:06:30.331 --rc geninfo_all_blocks=1 00:06:30.331 --rc geninfo_unexecuted_blocks=1 00:06:30.331 00:06:30.331 ' 00:06:30.331 17:22:22 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.331 --rc genhtml_branch_coverage=1 00:06:30.331 --rc genhtml_function_coverage=1 00:06:30.331 --rc genhtml_legend=1 00:06:30.331 --rc geninfo_all_blocks=1 00:06:30.331 --rc geninfo_unexecuted_blocks=1 00:06:30.331 00:06:30.331 ' 00:06:30.331 17:22:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:30.594 OK 00:06:30.594 17:22:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:30.594 00:06:30.594 real 0m0.221s 00:06:30.594 user 0m0.135s 00:06:30.594 sys 0m0.100s 00:06:30.594 17:22:22 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.594 17:22:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:30.594 ************************************ 00:06:30.594 END TEST rpc_client 00:06:30.594 ************************************ 00:06:30.594 17:22:22 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:06:30.594 17:22:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.594 17:22:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.594 17:22:22 -- common/autotest_common.sh@10 -- # set +x 00:06:30.594 ************************************ 00:06:30.594 START TEST json_config 00:06:30.594 ************************************ 00:06:30.594 17:22:22 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:30.594 17:22:22 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.594 17:22:22 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.594 17:22:22 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.594 17:22:22 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.594 17:22:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.594 17:22:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.594 17:22:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.594 17:22:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.594 17:22:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.594 17:22:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.594 17:22:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.594 17:22:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.594 17:22:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.856 17:22:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.856 17:22:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.856 17:22:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:30.856 17:22:22 json_config -- scripts/common.sh@345 -- # : 1 00:06:30.856 17:22:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.856 17:22:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.856 17:22:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:30.856 17:22:22 json_config -- scripts/common.sh@353 -- # local d=1 00:06:30.856 17:22:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.856 17:22:22 json_config -- scripts/common.sh@355 -- # echo 1 00:06:30.856 17:22:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.856 17:22:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:30.856 17:22:22 json_config -- scripts/common.sh@353 -- # local d=2 00:06:30.856 17:22:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.856 17:22:22 json_config -- scripts/common.sh@355 -- # echo 2 00:06:30.856 17:22:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.856 17:22:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.856 17:22:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.856 17:22:22 json_config -- scripts/common.sh@368 -- # return 0 00:06:30.856 17:22:22 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.856 17:22:22 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.856 --rc genhtml_branch_coverage=1 00:06:30.856 --rc genhtml_function_coverage=1 00:06:30.856 --rc genhtml_legend=1 00:06:30.856 --rc geninfo_all_blocks=1 00:06:30.856 --rc geninfo_unexecuted_blocks=1 00:06:30.856 00:06:30.856 ' 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.857 --rc genhtml_branch_coverage=1 00:06:30.857 --rc genhtml_function_coverage=1 00:06:30.857 --rc genhtml_legend=1 00:06:30.857 --rc geninfo_all_blocks=1 00:06:30.857 --rc geninfo_unexecuted_blocks=1 00:06:30.857 00:06:30.857 ' 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.857 --rc genhtml_branch_coverage=1 00:06:30.857 --rc genhtml_function_coverage=1 00:06:30.857 --rc genhtml_legend=1 00:06:30.857 --rc geninfo_all_blocks=1 00:06:30.857 --rc geninfo_unexecuted_blocks=1 00:06:30.857 00:06:30.857 ' 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.857 --rc genhtml_branch_coverage=1 00:06:30.857 --rc genhtml_function_coverage=1 00:06:30.857 --rc genhtml_legend=1 00:06:30.857 --rc geninfo_all_blocks=1 00:06:30.857 --rc geninfo_unexecuted_blocks=1 00:06:30.857 00:06:30.857 ' 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:30.857 17:22:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.857 17:22:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.857 17:22:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.857 17:22:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.857 17:22:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.857 17:22:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.857 17:22:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.857 17:22:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.857 17:22:22 json_config -- paths/export.sh@5 -- # export PATH 00:06:30.857 17:22:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@51 -- # : 0 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:06:30.857 17:22:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.857 17:22:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:30.857 INFO: JSON configuration test init 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.857 17:22:22 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:30.857 17:22:22 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:30.857 17:22:22 json_config -- json_config/common.sh@10 -- # shift 00:06:30.857 17:22:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.857 17:22:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.857 17:22:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.857 17:22:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.857 17:22:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.857 17:22:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1476873 00:06:30.857 17:22:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.857 Waiting for target to run... 00:06:30.857 17:22:22 json_config -- json_config/common.sh@25 -- # waitforlisten 1476873 /var/tmp/spdk_tgt.sock 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@835 -- # '[' -z 1476873 ']' 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.857 17:22:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.857 17:22:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.857 [2024-12-06 17:22:22.786768] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:06:30.857 [2024-12-06 17:22:22.786842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476873 ] 00:06:31.118 [2024-12-06 17:22:23.117671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.118 [2024-12-06 17:22:23.145909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.689 17:22:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.689 17:22:23 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:31.689 17:22:23 json_config -- json_config/common.sh@26 -- # echo '' 00:06:31.689 00:06:31.689 17:22:23 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:31.689 17:22:23 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:31.689 17:22:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.689 17:22:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.689 17:22:23 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:31.689 17:22:23 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:31.689 17:22:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.689 17:22:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.689 17:22:23 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:31.689 17:22:23 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:31.689 17:22:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:32.260 17:22:24 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:32.260 17:22:24 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:32.260 17:22:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.260 17:22:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.260 17:22:24 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:32.260 17:22:24 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:32.260 17:22:24 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:32.260 17:22:24 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:32.260 17:22:24 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:32.260 17:22:24 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:32.260 17:22:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:32.260 17:22:24 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:32.522 17:22:24 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@54 -- # sort 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:32.522 17:22:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.522 17:22:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:32.522 17:22:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.522 17:22:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:32.522 17:22:24 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:32.522 17:22:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:32.522 MallocForNvmf0 00:06:32.783 17:22:24 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:32.783 17:22:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:32.783 MallocForNvmf1 00:06:32.783 17:22:24 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:32.783 17:22:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:33.045 [2024-12-06 17:22:24.934943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.045 17:22:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:33.045 17:22:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:33.305 17:22:25 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:33.305 17:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:33.305 17:22:25 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:33.305 17:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:33.565 17:22:25 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:33.565 17:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:33.874 [2024-12-06 17:22:25.653119] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:33.874 17:22:25 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:33.874 17:22:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.874 17:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.874 17:22:25 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:33.874 17:22:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.874 17:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.874 17:22:25 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:33.874 17:22:25 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:33.874 17:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:33.874 MallocBdevForConfigChangeCheck 00:06:34.169 17:22:25 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:34.169 17:22:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.169 17:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.169 17:22:25 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:34.169 17:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.486 17:22:26 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:34.486 INFO: shutting down applications... 
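The run above assembles the whole NVMe-oF target configuration one RPC at a time: two malloc bdevs, the TCP transport, a subsystem, its namespaces, and finally the listener that produces the "Listening on 127.0.0.1 port 4420" notice. A minimal standalone sketch of that same sequence, assuming spdk_tgt is already up on the same socket (rpc.py and every method name and flag are taken verbatim from the log):

RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'       # SPDK's bundled RPC client
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024-byte blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport init
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420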
00:06:34.486 17:22:26 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:34.486 17:22:26 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:34.486 17:22:26 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:34.486 17:22:26 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:34.746 Calling clear_iscsi_subsystem 00:06:34.746 Calling clear_nvmf_subsystem 00:06:34.746 Calling clear_nbd_subsystem 00:06:34.746 Calling clear_ublk_subsystem 00:06:34.746 Calling clear_vhost_blk_subsystem 00:06:34.746 Calling clear_vhost_scsi_subsystem 00:06:34.746 Calling clear_bdev_subsystem 00:06:34.746 17:22:26 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:34.746 17:22:26 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:34.746 17:22:26 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:34.746 17:22:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.746 17:22:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:34.746 17:22:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:35.317 17:22:27 json_config -- json_config/json_config.sh@352 -- # break 00:06:35.317 17:22:27 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:35.317 17:22:27 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:35.317 17:22:27 json_config -- json_config/common.sh@31 -- # local app=target 00:06:35.317 17:22:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:35.317 17:22:27 json_config -- json_config/common.sh@35 -- # [[ -n 1476873 ]] 00:06:35.317 17:22:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1476873 00:06:35.317 17:22:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:35.317 17:22:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.317 17:22:27 json_config -- json_config/common.sh@41 -- # kill -0 1476873 00:06:35.317 17:22:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:35.577 17:22:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:35.577 17:22:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.577 17:22:27 json_config -- json_config/common.sh@41 -- # kill -0 1476873 00:06:35.577 17:22:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:35.577 17:22:27 json_config -- json_config/common.sh@43 -- # break 00:06:35.577 17:22:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:35.577 17:22:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:35.577 SPDK target shutdown done 00:06:35.578 17:22:27 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:35.578 INFO: relaunching applications... 
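Shutdown here follows a fixed pattern from json_config/common.sh, all visible above: send SIGINT, then poll the PID with kill -0 for up to 30 iterations with a 0.5 s sleep before printing "SPDK target shutdown done". A minimal sketch of that graceful-stop-with-timeout idiom, with the PID hard-coded for illustration:

pid=1476873                                # PID of the spdk_tgt under test
kill -SIGINT "$pid"                        # ask the target to shut down cleanly
for i in $(seq 1 30); do
    kill -0 "$pid" 2>/dev/null || break    # kill -0 only checks existence
    sleep 0.5
done
if kill -0 "$pid" 2>/dev/null; then        # still alive after ~15 s: escalate
    kill -9 "$pid"
fi
echo 'SPDK target shutdown done'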
00:06:35.578 17:22:27 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.578 17:22:27 json_config -- json_config/common.sh@9 -- # local app=target 00:06:35.578 17:22:27 json_config -- json_config/common.sh@10 -- # shift 00:06:35.578 17:22:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.578 17:22:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.578 17:22:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.578 17:22:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.578 17:22:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.578 17:22:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1478019 00:06:35.578 17:22:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.578 Waiting for target to run... 00:06:35.578 17:22:27 json_config -- json_config/common.sh@25 -- # waitforlisten 1478019 /var/tmp/spdk_tgt.sock 00:06:35.578 17:22:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.578 17:22:27 json_config -- common/autotest_common.sh@835 -- # '[' -z 1478019 ']' 00:06:35.578 17:22:27 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.578 17:22:27 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.578 17:22:27 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:35.578 17:22:27 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.578 17:22:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.839 [2024-12-06 17:22:27.644828] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:06:35.839 [2024-12-06 17:22:27.644881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478019 ] 00:06:36.100 [2024-12-06 17:22:27.985046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.100 [2024-12-06 17:22:28.016055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.671 [2024-12-06 17:22:28.518634] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.671 [2024-12-06 17:22:28.551022] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:36.671 17:22:28 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.671 17:22:28 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:36.671 17:22:28 json_config -- json_config/common.sh@26 -- # echo '' 00:06:36.671 00:06:36.671 17:22:28 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:36.671 17:22:28 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:36.671 INFO: Checking if target configuration is the same... 
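The "configuration is the same" check that follows works by canonicalizing two JSON dumps and diffing them: the live config fetched over save_config on one side, the spdk_tgt_config.json the target was relaunched from on the other, both passed through config_filter.py -method sort so key ordering cannot cause false mismatches. A minimal sketch of that comparison, reconstructed from the json_diff.sh trace below (temp-file suffixes are whatever mktemp picks):

RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
FILTER=./test/json_config/config_filter.py
a=$(mktemp /tmp/62.XXX); b=$(mktemp /tmp/spdk_tgt_config.json.XXX)
$RPC save_config | $FILTER -method sort > "$a"        # live configuration
$FILTER -method sort < spdk_tgt_config.json > "$b"    # saved configuration
diff -u "$a" "$b" && echo 'INFO: JSON config files are the same'
rm -f "$a" "$b"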
00:06:36.672 17:22:28 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.672 17:22:28 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:36.672 17:22:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:36.672 + '[' 2 -ne 2 ']' 00:06:36.672 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:36.672 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:36.672 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:36.672 +++ basename /dev/fd/62 00:06:36.672 ++ mktemp /tmp/62.XXX 00:06:36.672 + tmp_file_1=/tmp/62.7Y8 00:06:36.672 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.672 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:36.672 + tmp_file_2=/tmp/spdk_tgt_config.json.y07 00:06:36.672 + ret=0 00:06:36.672 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:36.932 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:36.932 + diff -u /tmp/62.7Y8 /tmp/spdk_tgt_config.json.y07 00:06:36.932 + echo 'INFO: JSON config files are the same' 00:06:36.932 INFO: JSON config files are the same 00:06:36.932 + rm /tmp/62.7Y8 /tmp/spdk_tgt_config.json.y07 00:06:36.932 + exit 0 00:06:36.932 17:22:28 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:36.932 17:22:28 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:36.932 INFO: changing configuration and checking if this can be detected... 00:06:36.932 17:22:28 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:36.932 17:22:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:37.193 17:22:29 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:37.193 17:22:29 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.193 17:22:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.193 + '[' 2 -ne 2 ']' 00:06:37.193 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:37.193 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:37.193 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:37.193 +++ basename /dev/fd/62 00:06:37.193 ++ mktemp /tmp/62.XXX 00:06:37.193 + tmp_file_1=/tmp/62.IuB 00:06:37.193 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.193 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:37.193 + tmp_file_2=/tmp/spdk_tgt_config.json.dmE 00:06:37.193 + ret=0 00:06:37.193 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:37.454 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:37.714 + diff -u /tmp/62.IuB /tmp/spdk_tgt_config.json.dmE 00:06:37.714 + ret=1 00:06:37.714 + echo '=== Start of file: /tmp/62.IuB ===' 00:06:37.714 + cat /tmp/62.IuB 00:06:37.714 + echo '=== End of file: /tmp/62.IuB ===' 00:06:37.714 + echo '' 00:06:37.714 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dmE ===' 00:06:37.714 + cat /tmp/spdk_tgt_config.json.dmE 00:06:37.714 + echo '=== End of file: /tmp/spdk_tgt_config.json.dmE ===' 00:06:37.714 + echo '' 00:06:37.714 + rm /tmp/62.IuB /tmp/spdk_tgt_config.json.dmE 00:06:37.714 + exit 1 00:06:37.714 17:22:29 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:37.714 INFO: configuration change detected. 00:06:37.714 17:22:29 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:37.714 17:22:29 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:37.714 17:22:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.714 17:22:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.714 17:22:29 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:37.714 17:22:29 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:37.714 17:22:29 json_config -- json_config/json_config.sh@324 -- # [[ -n 1478019 ]] 00:06:37.714 17:22:29 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:37.714 17:22:29 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:37.714 17:22:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.715 17:22:29 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:37.715 17:22:29 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:37.715 17:22:29 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:37.715 17:22:29 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:37.715 17:22:29 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:37.715 17:22:29 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.715 17:22:29 json_config -- json_config/json_config.sh@330 -- # killprocess 1478019 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@954 -- # '[' -z 1478019 ']' 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@958 -- # kill -0 1478019 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@959 -- # uname 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.715 17:22:29 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1478019 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1478019' 00:06:37.715 killing process with pid 1478019 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@973 -- # kill 1478019 00:06:37.715 17:22:29 json_config -- common/autotest_common.sh@978 -- # wait 1478019 00:06:37.975 17:22:29 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.975 17:22:29 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:37.975 17:22:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.975 17:22:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.975 17:22:29 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:37.975 17:22:29 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:37.975 INFO: Success 00:06:37.975 00:06:37.975 real 0m7.475s 00:06:37.975 user 0m9.010s 00:06:37.975 sys 0m2.035s 00:06:37.975 17:22:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.975 17:22:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.975 ************************************ 00:06:37.975 END TEST json_config 00:06:37.975 ************************************ 00:06:37.975 17:22:30 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:37.975 17:22:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.975 17:22:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.975 17:22:30 -- common/autotest_common.sh@10 -- # set +x 00:06:38.236 ************************************ 00:06:38.236 START TEST json_config_extra_key 00:06:38.236 ************************************ 00:06:38.236 17:22:30 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:38.236 17:22:30 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.236 17:22:30 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.236 17:22:30 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:38.236 17:22:30 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.237 17:22:30 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:38.237 17:22:30 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.237 17:22:30 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:38.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.237 --rc genhtml_branch_coverage=1 00:06:38.237 --rc genhtml_function_coverage=1 00:06:38.237 --rc genhtml_legend=1 00:06:38.237 --rc geninfo_all_blocks=1 00:06:38.237 --rc geninfo_unexecuted_blocks=1 00:06:38.237 00:06:38.237 ' 00:06:38.237 17:22:30 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:38.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.237 --rc genhtml_branch_coverage=1 00:06:38.237 --rc genhtml_function_coverage=1 00:06:38.237 --rc genhtml_legend=1 00:06:38.237 --rc geninfo_all_blocks=1 00:06:38.237 --rc geninfo_unexecuted_blocks=1 00:06:38.237 00:06:38.237 ' 00:06:38.237 17:22:30 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:38.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.237 --rc genhtml_branch_coverage=1 00:06:38.237 --rc genhtml_function_coverage=1 00:06:38.237 --rc genhtml_legend=1 00:06:38.237 --rc geninfo_all_blocks=1 00:06:38.237 --rc geninfo_unexecuted_blocks=1 00:06:38.237 00:06:38.237 ' 00:06:38.237 17:22:30 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:38.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.237 --rc genhtml_branch_coverage=1 00:06:38.237 --rc genhtml_function_coverage=1 00:06:38.237 --rc genhtml_legend=1 00:06:38.237 --rc geninfo_all_blocks=1 00:06:38.237 --rc geninfo_unexecuted_blocks=1 00:06:38.237 00:06:38.237 ' 00:06:38.237 17:22:30 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.237 17:22:30 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.237 17:22:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.237 17:22:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.237 17:22:30 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.237 17:22:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:38.237 17:22:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.237 17:22:30 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:38.237 INFO: launching applications... 
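One genuine error is captured a few entries up: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq demands integer operands, so an empty value produces "[: : integer expression expected" (harmless here, since the branch simply isn't taken). The usual hardening is to default the expansion before comparing; sketched with a hypothetical variable name:

# Fails with "integer expression expected" when SOME_FLAG is empty or unset:
[ "$SOME_FLAG" -eq 1 ] && echo enabled
# Defaulting the expansion keeps both operands integers:
[ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled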
00:06:38.237 17:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:38.237 17:22:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:38.237 17:22:30 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:38.237 17:22:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:38.237 17:22:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:38.237 17:22:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:38.237 17:22:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.237 17:22:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.238 17:22:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1478752 00:06:38.238 17:22:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:38.238 Waiting for target to run... 00:06:38.238 17:22:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1478752 /var/tmp/spdk_tgt.sock 00:06:38.238 17:22:30 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1478752 ']' 00:06:38.238 17:22:30 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:38.238 17:22:30 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:38.238 17:22:30 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.238 17:22:30 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:38.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:38.238 17:22:30 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.238 17:22:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:38.498 [2024-12-06 17:22:30.326254] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:06:38.498 [2024-12-06 17:22:30.326324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478752 ] 00:06:38.759 [2024-12-06 17:22:30.645610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.759 [2024-12-06 17:22:30.671824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.328 17:22:31 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.328 17:22:31 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:39.328 17:22:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:39.328 00:06:39.328 17:22:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:39.328 INFO: shutting down applications... 
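Unlike the earlier json_config run, this target is configured entirely from a file: spdk_tgt ... --json .../test/json_config/extra_key.json. Such files use the same layout save_config emits, a "subsystems" array of ordered method/params records. A minimal hand-written example in that shape (the path, bdev name, and sizes are illustrative, not the contents of the real extra_key.json):

cat > /tmp/extra_key_example.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "MallocForTest0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key_example.json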
00:06:39.328 17:22:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:39.328 17:22:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:39.328 17:22:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:39.328 17:22:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1478752 ]] 00:06:39.328 17:22:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1478752 00:06:39.328 17:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:39.328 17:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:39.328 17:22:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1478752 00:06:39.328 17:22:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:39.589 17:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:39.589 17:22:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:39.589 17:22:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1478752 00:06:39.589 17:22:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:39.589 17:22:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:39.589 17:22:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:39.589 17:22:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:39.589 SPDK target shutdown done 00:06:39.589 17:22:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:39.589 Success 00:06:39.589 00:06:39.589 real 0m1.587s 00:06:39.589 user 0m1.185s 00:06:39.589 sys 0m0.449s 00:06:39.589 17:22:31 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.589 17:22:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:39.589 ************************************ 00:06:39.589 END TEST json_config_extra_key 00:06:39.589 ************************************ 00:06:39.850 17:22:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:39.850 17:22:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.850 17:22:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.850 17:22:31 -- common/autotest_common.sh@10 -- # set +x 00:06:39.850 ************************************ 00:06:39.850 START TEST alias_rpc 00:06:39.850 ************************************ 00:06:39.850 17:22:31 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:39.850 * Looking for test storage... 
00:06:39.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:39.850 17:22:31 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:39.850 17:22:31 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:39.850 17:22:31 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:39.850 17:22:31 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:39.850 17:22:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.111 17:22:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:40.111 17:22:31 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.111 17:22:31 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.111 --rc genhtml_branch_coverage=1 00:06:40.111 --rc genhtml_function_coverage=1 00:06:40.111 --rc genhtml_legend=1 00:06:40.111 --rc geninfo_all_blocks=1 00:06:40.111 --rc geninfo_unexecuted_blocks=1 00:06:40.111 00:06:40.111 ' 00:06:40.111 17:22:31 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.111 --rc genhtml_branch_coverage=1 00:06:40.111 --rc genhtml_function_coverage=1 00:06:40.111 --rc genhtml_legend=1 00:06:40.111 --rc geninfo_all_blocks=1 00:06:40.111 --rc geninfo_unexecuted_blocks=1 00:06:40.111 00:06:40.111 ' 00:06:40.111 17:22:31 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.111 --rc genhtml_branch_coverage=1 00:06:40.111 --rc genhtml_function_coverage=1 00:06:40.111 --rc genhtml_legend=1 00:06:40.111 --rc geninfo_all_blocks=1 00:06:40.111 --rc geninfo_unexecuted_blocks=1 00:06:40.111 00:06:40.111 ' 00:06:40.111 17:22:31 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.111 --rc genhtml_branch_coverage=1 00:06:40.111 --rc genhtml_function_coverage=1 00:06:40.111 --rc genhtml_legend=1 00:06:40.111 --rc geninfo_all_blocks=1 00:06:40.111 --rc geninfo_unexecuted_blocks=1 00:06:40.111 00:06:40.111 ' 00:06:40.111 17:22:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.111 17:22:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1479106 00:06:40.111 17:22:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1479106 00:06:40.111 17:22:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.111 17:22:31 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1479106 ']' 00:06:40.111 17:22:31 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.111 17:22:31 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.111 17:22:31 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.111 17:22:31 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.111 17:22:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.111 [2024-12-06 17:22:31.990898] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
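waitforlisten (from autotest_common.sh, invoked above for PID 1479106) parks the test until the freshly launched spdk_tgt is actually accepting RPCs on its UNIX-domain socket. A minimal sketch of that polling idea, assuming the default /var/tmp/spdk.sock and using spdk_get_version (a cheap method that appears in the rpc_get_methods listing further down) as the liveness probe:

sock=/var/tmp/spdk.sock
for i in $(seq 1 100); do
    # success means the app created the socket and its RPC server is answering
    ./scripts/rpc.py -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done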
00:06:40.111 [2024-12-06 17:22:31.990976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479106 ] 00:06:40.111 [2024-12-06 17:22:32.075142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.111 [2024-12-06 17:22:32.110234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.050 17:22:32 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.050 17:22:32 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.050 17:22:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:41.050 17:22:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1479106 00:06:41.050 17:22:32 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1479106 ']' 00:06:41.050 17:22:32 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1479106 00:06:41.050 17:22:32 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:41.050 17:22:32 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.050 17:22:32 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479106 00:06:41.050 17:22:33 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.050 17:22:33 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.050 17:22:33 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479106' 00:06:41.050 killing process with pid 1479106 00:06:41.050 17:22:33 alias_rpc -- common/autotest_common.sh@973 -- # kill 1479106 00:06:41.050 17:22:33 alias_rpc -- common/autotest_common.sh@978 -- # wait 1479106 00:06:41.310 00:06:41.310 real 0m1.503s 00:06:41.310 user 0m1.657s 00:06:41.310 sys 0m0.415s 00:06:41.310 17:22:33 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.310 17:22:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.310 ************************************ 00:06:41.310 END TEST alias_rpc 00:06:41.310 ************************************ 00:06:41.310 17:22:33 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:41.310 17:22:33 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:41.311 17:22:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.311 17:22:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.311 17:22:33 -- common/autotest_common.sh@10 -- # set +x 00:06:41.311 ************************************ 00:06:41.311 START TEST spdkcli_tcp 00:06:41.311 ************************************ 00:06:41.311 17:22:33 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:41.570 * Looking for test storage... 
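The killprocess helper that tore down PID 1479106 above is deliberately paranoid: it verifies the PID still exists (kill -0), checks the platform (uname), and reads the command name back with ps --no-headers -o comm= — for an SPDK app that comes back as reactor_0 — refusing to signal anything that looks like a sudo wrapper. A condensed sketch of that logic (simplified: the real helper handles the sudo case rather than just bailing out):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # already gone?
    local name
    name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0 for spdk_tgt
    [ "$name" = sudo ] && return 1                # don't signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                   # reap and collect exit status
}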
00:06:41.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:41.570 17:22:33 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.570 17:22:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.570 17:22:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.570 17:22:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.570 17:22:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.570 17:22:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.570 17:22:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.570 17:22:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.570 17:22:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.571 17:22:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.571 --rc genhtml_branch_coverage=1 00:06:41.571 --rc genhtml_function_coverage=1 00:06:41.571 --rc genhtml_legend=1 00:06:41.571 --rc geninfo_all_blocks=1 00:06:41.571 --rc geninfo_unexecuted_blocks=1 00:06:41.571 00:06:41.571 ' 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.571 --rc genhtml_branch_coverage=1 00:06:41.571 --rc genhtml_function_coverage=1 00:06:41.571 --rc genhtml_legend=1 00:06:41.571 --rc geninfo_all_blocks=1 00:06:41.571 --rc 
geninfo_unexecuted_blocks=1 00:06:41.571 00:06:41.571 ' 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.571 --rc genhtml_branch_coverage=1 00:06:41.571 --rc genhtml_function_coverage=1 00:06:41.571 --rc genhtml_legend=1 00:06:41.571 --rc geninfo_all_blocks=1 00:06:41.571 --rc geninfo_unexecuted_blocks=1 00:06:41.571 00:06:41.571 ' 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.571 --rc genhtml_branch_coverage=1 00:06:41.571 --rc genhtml_function_coverage=1 00:06:41.571 --rc genhtml_legend=1 00:06:41.571 --rc geninfo_all_blocks=1 00:06:41.571 --rc geninfo_unexecuted_blocks=1 00:06:41.571 00:06:41.571 ' 00:06:41.571 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:41.571 17:22:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:41.571 17:22:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:41.571 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:41.571 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:41.571 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:41.571 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.571 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1479451 00:06:41.571 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1479451 00:06:41.571 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1479451 ']' 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.571 17:22:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.571 [2024-12-06 17:22:33.571426] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
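spdkcli_tcp's twist is that it talks to the RPC server over TCP rather than the UNIX socket: the run that follows starts socat as a TCP-LISTEN:9998 to UNIX-CONNECT bridge and then points rpc.py at 127.0.0.1:9998. A minimal sketch using the exact values from the log (socket path assumed to be the default):

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge TCP 9998 to the RPC socket
socat_pid=$!
# -r 100: retry the connection up to 100 times; -t 2: request timeout in seconds
./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"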
00:06:41.571 [2024-12-06 17:22:33.571495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479451 ] 00:06:41.831 [2024-12-06 17:22:33.637718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.831 [2024-12-06 17:22:33.671016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.831 [2024-12-06 17:22:33.671103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.831 17:22:33 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.831 17:22:33 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:41.831 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1479608 00:06:41.831 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:41.831 17:22:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:42.091 [ 00:06:42.091 "bdev_malloc_delete", 00:06:42.091 "bdev_malloc_create", 00:06:42.091 "bdev_null_resize", 00:06:42.091 "bdev_null_delete", 00:06:42.091 "bdev_null_create", 00:06:42.091 "bdev_nvme_cuse_unregister", 00:06:42.091 "bdev_nvme_cuse_register", 00:06:42.091 "bdev_opal_new_user", 00:06:42.091 "bdev_opal_set_lock_state", 00:06:42.091 "bdev_opal_delete", 00:06:42.091 "bdev_opal_get_info", 00:06:42.091 "bdev_opal_create", 00:06:42.091 "bdev_nvme_opal_revert", 00:06:42.091 "bdev_nvme_opal_init", 00:06:42.091 "bdev_nvme_send_cmd", 00:06:42.091 "bdev_nvme_set_keys", 00:06:42.091 "bdev_nvme_get_path_iostat", 00:06:42.091 "bdev_nvme_get_mdns_discovery_info", 00:06:42.091 "bdev_nvme_stop_mdns_discovery", 00:06:42.091 "bdev_nvme_start_mdns_discovery", 00:06:42.091 "bdev_nvme_set_multipath_policy", 00:06:42.091 "bdev_nvme_set_preferred_path", 00:06:42.091 "bdev_nvme_get_io_paths", 00:06:42.091 "bdev_nvme_remove_error_injection", 00:06:42.092 "bdev_nvme_add_error_injection", 00:06:42.092 "bdev_nvme_get_discovery_info", 00:06:42.092 "bdev_nvme_stop_discovery", 00:06:42.092 "bdev_nvme_start_discovery", 00:06:42.092 "bdev_nvme_get_controller_health_info", 00:06:42.092 "bdev_nvme_disable_controller", 00:06:42.092 "bdev_nvme_enable_controller", 00:06:42.092 "bdev_nvme_reset_controller", 00:06:42.092 "bdev_nvme_get_transport_statistics", 00:06:42.092 "bdev_nvme_apply_firmware", 00:06:42.092 "bdev_nvme_detach_controller", 00:06:42.092 "bdev_nvme_get_controllers", 00:06:42.092 "bdev_nvme_attach_controller", 00:06:42.092 "bdev_nvme_set_hotplug", 00:06:42.092 "bdev_nvme_set_options", 00:06:42.092 "bdev_passthru_delete", 00:06:42.092 "bdev_passthru_create", 00:06:42.092 "bdev_lvol_set_parent_bdev", 00:06:42.092 "bdev_lvol_set_parent", 00:06:42.092 "bdev_lvol_check_shallow_copy", 00:06:42.092 "bdev_lvol_start_shallow_copy", 00:06:42.092 "bdev_lvol_grow_lvstore", 00:06:42.092 "bdev_lvol_get_lvols", 00:06:42.092 "bdev_lvol_get_lvstores", 00:06:42.092 "bdev_lvol_delete", 00:06:42.092 "bdev_lvol_set_read_only", 00:06:42.092 "bdev_lvol_resize", 00:06:42.092 "bdev_lvol_decouple_parent", 00:06:42.092 "bdev_lvol_inflate", 00:06:42.092 "bdev_lvol_rename", 00:06:42.092 "bdev_lvol_clone_bdev", 00:06:42.092 "bdev_lvol_clone", 00:06:42.092 "bdev_lvol_snapshot", 00:06:42.092 "bdev_lvol_create", 00:06:42.092 "bdev_lvol_delete_lvstore", 00:06:42.092 "bdev_lvol_rename_lvstore", 
00:06:42.092 "bdev_lvol_create_lvstore", 00:06:42.092 "bdev_raid_set_options", 00:06:42.092 "bdev_raid_remove_base_bdev", 00:06:42.092 "bdev_raid_add_base_bdev", 00:06:42.092 "bdev_raid_delete", 00:06:42.092 "bdev_raid_create", 00:06:42.092 "bdev_raid_get_bdevs", 00:06:42.092 "bdev_error_inject_error", 00:06:42.092 "bdev_error_delete", 00:06:42.092 "bdev_error_create", 00:06:42.092 "bdev_split_delete", 00:06:42.092 "bdev_split_create", 00:06:42.092 "bdev_delay_delete", 00:06:42.092 "bdev_delay_create", 00:06:42.092 "bdev_delay_update_latency", 00:06:42.092 "bdev_zone_block_delete", 00:06:42.092 "bdev_zone_block_create", 00:06:42.092 "blobfs_create", 00:06:42.092 "blobfs_detect", 00:06:42.092 "blobfs_set_cache_size", 00:06:42.092 "bdev_aio_delete", 00:06:42.092 "bdev_aio_rescan", 00:06:42.092 "bdev_aio_create", 00:06:42.092 "bdev_ftl_set_property", 00:06:42.092 "bdev_ftl_get_properties", 00:06:42.092 "bdev_ftl_get_stats", 00:06:42.092 "bdev_ftl_unmap", 00:06:42.092 "bdev_ftl_unload", 00:06:42.092 "bdev_ftl_delete", 00:06:42.092 "bdev_ftl_load", 00:06:42.092 "bdev_ftl_create", 00:06:42.092 "bdev_virtio_attach_controller", 00:06:42.092 "bdev_virtio_scsi_get_devices", 00:06:42.092 "bdev_virtio_detach_controller", 00:06:42.092 "bdev_virtio_blk_set_hotplug", 00:06:42.092 "bdev_iscsi_delete", 00:06:42.092 "bdev_iscsi_create", 00:06:42.092 "bdev_iscsi_set_options", 00:06:42.092 "accel_error_inject_error", 00:06:42.092 "ioat_scan_accel_module", 00:06:42.092 "dsa_scan_accel_module", 00:06:42.092 "iaa_scan_accel_module", 00:06:42.092 "vfu_virtio_create_fs_endpoint", 00:06:42.092 "vfu_virtio_create_scsi_endpoint", 00:06:42.092 "vfu_virtio_scsi_remove_target", 00:06:42.092 "vfu_virtio_scsi_add_target", 00:06:42.092 "vfu_virtio_create_blk_endpoint", 00:06:42.092 "vfu_virtio_delete_endpoint", 00:06:42.092 "keyring_file_remove_key", 00:06:42.092 "keyring_file_add_key", 00:06:42.092 "keyring_linux_set_options", 00:06:42.092 "fsdev_aio_delete", 00:06:42.092 "fsdev_aio_create", 00:06:42.092 "iscsi_get_histogram", 00:06:42.092 "iscsi_enable_histogram", 00:06:42.092 "iscsi_set_options", 00:06:42.092 "iscsi_get_auth_groups", 00:06:42.092 "iscsi_auth_group_remove_secret", 00:06:42.092 "iscsi_auth_group_add_secret", 00:06:42.092 "iscsi_delete_auth_group", 00:06:42.092 "iscsi_create_auth_group", 00:06:42.092 "iscsi_set_discovery_auth", 00:06:42.092 "iscsi_get_options", 00:06:42.092 "iscsi_target_node_request_logout", 00:06:42.092 "iscsi_target_node_set_redirect", 00:06:42.092 "iscsi_target_node_set_auth", 00:06:42.092 "iscsi_target_node_add_lun", 00:06:42.092 "iscsi_get_stats", 00:06:42.092 "iscsi_get_connections", 00:06:42.092 "iscsi_portal_group_set_auth", 00:06:42.092 "iscsi_start_portal_group", 00:06:42.092 "iscsi_delete_portal_group", 00:06:42.092 "iscsi_create_portal_group", 00:06:42.092 "iscsi_get_portal_groups", 00:06:42.092 "iscsi_delete_target_node", 00:06:42.092 "iscsi_target_node_remove_pg_ig_maps", 00:06:42.092 "iscsi_target_node_add_pg_ig_maps", 00:06:42.092 "iscsi_create_target_node", 00:06:42.092 "iscsi_get_target_nodes", 00:06:42.092 "iscsi_delete_initiator_group", 00:06:42.092 "iscsi_initiator_group_remove_initiators", 00:06:42.092 "iscsi_initiator_group_add_initiators", 00:06:42.092 "iscsi_create_initiator_group", 00:06:42.092 "iscsi_get_initiator_groups", 00:06:42.092 "nvmf_set_crdt", 00:06:42.092 "nvmf_set_config", 00:06:42.092 "nvmf_set_max_subsystems", 00:06:42.092 "nvmf_stop_mdns_prr", 00:06:42.092 "nvmf_publish_mdns_prr", 00:06:42.092 "nvmf_subsystem_get_listeners", 00:06:42.092 
"nvmf_subsystem_get_qpairs", 00:06:42.092 "nvmf_subsystem_get_controllers", 00:06:42.092 "nvmf_get_stats", 00:06:42.092 "nvmf_get_transports", 00:06:42.092 "nvmf_create_transport", 00:06:42.092 "nvmf_get_targets", 00:06:42.092 "nvmf_delete_target", 00:06:42.092 "nvmf_create_target", 00:06:42.092 "nvmf_subsystem_allow_any_host", 00:06:42.092 "nvmf_subsystem_set_keys", 00:06:42.092 "nvmf_subsystem_remove_host", 00:06:42.092 "nvmf_subsystem_add_host", 00:06:42.092 "nvmf_ns_remove_host", 00:06:42.092 "nvmf_ns_add_host", 00:06:42.092 "nvmf_subsystem_remove_ns", 00:06:42.092 "nvmf_subsystem_set_ns_ana_group", 00:06:42.092 "nvmf_subsystem_add_ns", 00:06:42.092 "nvmf_subsystem_listener_set_ana_state", 00:06:42.092 "nvmf_discovery_get_referrals", 00:06:42.092 "nvmf_discovery_remove_referral", 00:06:42.092 "nvmf_discovery_add_referral", 00:06:42.092 "nvmf_subsystem_remove_listener", 00:06:42.092 "nvmf_subsystem_add_listener", 00:06:42.092 "nvmf_delete_subsystem", 00:06:42.092 "nvmf_create_subsystem", 00:06:42.092 "nvmf_get_subsystems", 00:06:42.092 "env_dpdk_get_mem_stats", 00:06:42.092 "nbd_get_disks", 00:06:42.092 "nbd_stop_disk", 00:06:42.092 "nbd_start_disk", 00:06:42.092 "ublk_recover_disk", 00:06:42.092 "ublk_get_disks", 00:06:42.092 "ublk_stop_disk", 00:06:42.092 "ublk_start_disk", 00:06:42.092 "ublk_destroy_target", 00:06:42.092 "ublk_create_target", 00:06:42.092 "virtio_blk_create_transport", 00:06:42.092 "virtio_blk_get_transports", 00:06:42.092 "vhost_controller_set_coalescing", 00:06:42.092 "vhost_get_controllers", 00:06:42.092 "vhost_delete_controller", 00:06:42.092 "vhost_create_blk_controller", 00:06:42.092 "vhost_scsi_controller_remove_target", 00:06:42.093 "vhost_scsi_controller_add_target", 00:06:42.093 "vhost_start_scsi_controller", 00:06:42.093 "vhost_create_scsi_controller", 00:06:42.093 "thread_set_cpumask", 00:06:42.093 "scheduler_set_options", 00:06:42.093 "framework_get_governor", 00:06:42.093 "framework_get_scheduler", 00:06:42.093 "framework_set_scheduler", 00:06:42.093 "framework_get_reactors", 00:06:42.093 "thread_get_io_channels", 00:06:42.093 "thread_get_pollers", 00:06:42.093 "thread_get_stats", 00:06:42.093 "framework_monitor_context_switch", 00:06:42.093 "spdk_kill_instance", 00:06:42.093 "log_enable_timestamps", 00:06:42.093 "log_get_flags", 00:06:42.093 "log_clear_flag", 00:06:42.093 "log_set_flag", 00:06:42.093 "log_get_level", 00:06:42.093 "log_set_level", 00:06:42.093 "log_get_print_level", 00:06:42.093 "log_set_print_level", 00:06:42.093 "framework_enable_cpumask_locks", 00:06:42.093 "framework_disable_cpumask_locks", 00:06:42.093 "framework_wait_init", 00:06:42.093 "framework_start_init", 00:06:42.093 "scsi_get_devices", 00:06:42.093 "bdev_get_histogram", 00:06:42.093 "bdev_enable_histogram", 00:06:42.093 "bdev_set_qos_limit", 00:06:42.093 "bdev_set_qd_sampling_period", 00:06:42.093 "bdev_get_bdevs", 00:06:42.093 "bdev_reset_iostat", 00:06:42.093 "bdev_get_iostat", 00:06:42.093 "bdev_examine", 00:06:42.093 "bdev_wait_for_examine", 00:06:42.093 "bdev_set_options", 00:06:42.093 "accel_get_stats", 00:06:42.093 "accel_set_options", 00:06:42.093 "accel_set_driver", 00:06:42.093 "accel_crypto_key_destroy", 00:06:42.093 "accel_crypto_keys_get", 00:06:42.093 "accel_crypto_key_create", 00:06:42.093 "accel_assign_opc", 00:06:42.093 "accel_get_module_info", 00:06:42.093 "accel_get_opc_assignments", 00:06:42.093 "vmd_rescan", 00:06:42.093 "vmd_remove_device", 00:06:42.093 "vmd_enable", 00:06:42.093 "sock_get_default_impl", 00:06:42.093 "sock_set_default_impl", 
00:06:42.093 "sock_impl_set_options", 00:06:42.093 "sock_impl_get_options", 00:06:42.093 "iobuf_get_stats", 00:06:42.093 "iobuf_set_options", 00:06:42.093 "keyring_get_keys", 00:06:42.093 "vfu_tgt_set_base_path", 00:06:42.093 "framework_get_pci_devices", 00:06:42.093 "framework_get_config", 00:06:42.093 "framework_get_subsystems", 00:06:42.093 "fsdev_set_opts", 00:06:42.093 "fsdev_get_opts", 00:06:42.093 "trace_get_info", 00:06:42.093 "trace_get_tpoint_group_mask", 00:06:42.093 "trace_disable_tpoint_group", 00:06:42.093 "trace_enable_tpoint_group", 00:06:42.093 "trace_clear_tpoint_mask", 00:06:42.093 "trace_set_tpoint_mask", 00:06:42.093 "notify_get_notifications", 00:06:42.093 "notify_get_types", 00:06:42.093 "spdk_get_version", 00:06:42.093 "rpc_get_methods" 00:06:42.093 ] 00:06:42.093 17:22:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.093 17:22:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:42.093 17:22:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1479451 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1479451 ']' 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1479451 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479451 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479451' 00:06:42.093 killing process with pid 1479451 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1479451 00:06:42.093 17:22:34 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1479451 00:06:42.353 00:06:42.353 real 0m1.022s 00:06:42.353 user 0m1.728s 00:06:42.353 sys 0m0.431s 00:06:42.353 17:22:34 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.353 17:22:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.353 ************************************ 00:06:42.353 END TEST spdkcli_tcp 00:06:42.353 ************************************ 00:06:42.353 17:22:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:42.353 17:22:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.353 17:22:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.353 17:22:34 -- common/autotest_common.sh@10 -- # set +x 00:06:42.353 ************************************ 00:06:42.353 START TEST dpdk_mem_utility 00:06:42.353 ************************************ 00:06:42.353 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:42.613 * Looking for test storage... 
00:06:42.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:42.613 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:42.613 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:42.613 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:42.613 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:42.613 17:22:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.614 17:22:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:42.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.614 --rc genhtml_branch_coverage=1 00:06:42.614 --rc genhtml_function_coverage=1 00:06:42.614 --rc genhtml_legend=1 00:06:42.614 --rc geninfo_all_blocks=1 00:06:42.614 --rc geninfo_unexecuted_blocks=1 00:06:42.614 00:06:42.614 ' 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:42.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.614 --rc 
genhtml_branch_coverage=1 00:06:42.614 --rc genhtml_function_coverage=1 00:06:42.614 --rc genhtml_legend=1 00:06:42.614 --rc geninfo_all_blocks=1 00:06:42.614 --rc geninfo_unexecuted_blocks=1 00:06:42.614 00:06:42.614 ' 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:42.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.614 --rc genhtml_branch_coverage=1 00:06:42.614 --rc genhtml_function_coverage=1 00:06:42.614 --rc genhtml_legend=1 00:06:42.614 --rc geninfo_all_blocks=1 00:06:42.614 --rc geninfo_unexecuted_blocks=1 00:06:42.614 00:06:42.614 ' 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:42.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.614 --rc genhtml_branch_coverage=1 00:06:42.614 --rc genhtml_function_coverage=1 00:06:42.614 --rc genhtml_legend=1 00:06:42.614 --rc geninfo_all_blocks=1 00:06:42.614 --rc geninfo_unexecuted_blocks=1 00:06:42.614 00:06:42.614 ' 00:06:42.614 17:22:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:42.614 17:22:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1479688 00:06:42.614 17:22:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1479688 00:06:42.614 17:22:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1479688 ']' 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.614 17:22:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.614 [2024-12-06 17:22:34.658401] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:06:42.614 [2024-12-06 17:22:34.658468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479688 ] 00:06:42.874 [2024-12-06 17:22:34.745575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.874 [2024-12-06 17:22:34.779044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.445 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.445 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:43.445 17:22:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:43.445 17:22:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:43.445 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.445 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:43.445 { 00:06:43.445 "filename": "/tmp/spdk_mem_dump.txt" 00:06:43.445 } 00:06:43.445 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.445 17:22:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:43.445 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:43.445 1 heaps totaling size 818.000000 MiB 00:06:43.445 size: 818.000000 MiB heap id: 0 00:06:43.445 end heaps---------- 00:06:43.445 9 mempools totaling size 603.782043 MiB 00:06:43.445 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:43.445 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:43.445 size: 100.555481 MiB name: bdev_io_1479688 00:06:43.445 size: 50.003479 MiB name: msgpool_1479688 00:06:43.445 size: 36.509338 MiB name: fsdev_io_1479688 00:06:43.445 size: 21.763794 MiB name: PDU_Pool 00:06:43.445 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:43.445 size: 4.133484 MiB name: evtpool_1479688 00:06:43.445 size: 0.026123 MiB name: Session_Pool 00:06:43.445 end mempools------- 00:06:43.445 6 memzones totaling size 4.142822 MiB 00:06:43.445 size: 1.000366 MiB name: RG_ring_0_1479688 00:06:43.445 size: 1.000366 MiB name: RG_ring_1_1479688 00:06:43.445 size: 1.000366 MiB name: RG_ring_4_1479688 00:06:43.445 size: 1.000366 MiB name: RG_ring_5_1479688 00:06:43.446 size: 0.125366 MiB name: RG_ring_2_1479688 00:06:43.446 size: 0.015991 MiB name: RG_ring_3_1479688 00:06:43.446 end memzones------- 00:06:43.446 17:22:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:43.706 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:43.706 list of free elements. 
size: 10.852478 MiB 00:06:43.706 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:43.706 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:43.706 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:43.706 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:43.706 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:43.706 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:43.706 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:43.706 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:43.706 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:43.706 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:43.706 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:43.706 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:43.706 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:43.706 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:43.706 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:43.706 list of standard malloc elements. size: 199.218628 MiB 00:06:43.706 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:43.706 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:43.706 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:43.706 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:43.706 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:43.706 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:43.706 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:43.706 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:43.706 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:43.706 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:43.706 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:43.706 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:43.706 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:43.706 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:43.706 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:43.706 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:43.706 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:43.706 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:43.706 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:43.706 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:43.706 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:43.706 list of memzone associated elements. size: 607.928894 MiB 00:06:43.706 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:43.706 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:43.706 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:43.706 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:43.706 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:43.706 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1479688_0 00:06:43.706 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:43.706 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1479688_0 00:06:43.706 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:43.706 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1479688_0 00:06:43.706 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:43.706 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:43.707 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:43.707 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:43.707 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:43.707 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1479688_0 00:06:43.707 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:43.707 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1479688 00:06:43.707 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:43.707 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1479688 00:06:43.707 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:43.707 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:43.707 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:43.707 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:43.707 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:43.707 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:43.707 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:43.707 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:43.707 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:43.707 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1479688 00:06:43.707 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:43.707 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1479688 00:06:43.707 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:43.707 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1479688 00:06:43.707 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:06:43.707 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1479688 00:06:43.707 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:43.707 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1479688 00:06:43.707 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:43.707 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1479688 00:06:43.707 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:43.707 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:43.707 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:43.707 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:43.707 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:43.707 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:43.707 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:43.707 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1479688 00:06:43.707 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:43.707 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1479688 00:06:43.707 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:43.707 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:43.707 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:43.707 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:43.707 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:43.707 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1479688 00:06:43.707 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:43.707 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:43.707 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:43.707 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1479688 00:06:43.707 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:43.707 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1479688 00:06:43.707 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:43.707 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1479688 00:06:43.707 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:43.707 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:43.707 17:22:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:43.707 17:22:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1479688 00:06:43.707 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1479688 ']' 00:06:43.707 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1479688 00:06:43.707 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:43.707 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.707 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479688 00:06:43.707 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.707 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.707 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479688' 00:06:43.707 killing process with pid 1479688 00:06:43.707 17:22:35 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1479688 00:06:43.707 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1479688 00:06:43.968 00:06:43.968 real 0m1.400s 00:06:43.968 user 0m1.470s 00:06:43.968 sys 0m0.427s 00:06:43.968 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.968 17:22:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:43.968 ************************************ 00:06:43.968 END TEST dpdk_mem_utility 00:06:43.968 ************************************ 00:06:43.968 17:22:35 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:43.968 17:22:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.968 17:22:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.968 17:22:35 -- common/autotest_common.sh@10 -- # set +x 00:06:43.968 ************************************ 00:06:43.968 START TEST event 00:06:43.968 ************************************ 00:06:43.968 17:22:35 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:43.968 * Looking for test storage... 00:06:43.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:43.968 17:22:35 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.968 17:22:35 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.968 17:22:35 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.228 17:22:36 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.228 17:22:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.228 17:22:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.228 17:22:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.228 17:22:36 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.228 17:22:36 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.228 17:22:36 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.228 17:22:36 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.228 17:22:36 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.228 17:22:36 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.228 17:22:36 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.228 17:22:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.228 17:22:36 event -- scripts/common.sh@344 -- # case "$op" in 00:06:44.228 17:22:36 event -- scripts/common.sh@345 -- # : 1 00:06:44.228 17:22:36 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.228 17:22:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.228 17:22:36 event -- scripts/common.sh@365 -- # decimal 1 00:06:44.229 17:22:36 event -- scripts/common.sh@353 -- # local d=1 00:06:44.229 17:22:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.229 17:22:36 event -- scripts/common.sh@355 -- # echo 1 00:06:44.229 17:22:36 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.229 17:22:36 event -- scripts/common.sh@366 -- # decimal 2 00:06:44.229 17:22:36 event -- scripts/common.sh@353 -- # local d=2 00:06:44.229 17:22:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.229 17:22:36 event -- scripts/common.sh@355 -- # echo 2 00:06:44.229 17:22:36 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.229 17:22:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.229 17:22:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.229 17:22:36 event -- scripts/common.sh@368 -- # return 0 00:06:44.229 17:22:36 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.229 17:22:36 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.229 --rc genhtml_branch_coverage=1 00:06:44.229 --rc genhtml_function_coverage=1 00:06:44.229 --rc genhtml_legend=1 00:06:44.229 --rc geninfo_all_blocks=1 00:06:44.229 --rc geninfo_unexecuted_blocks=1 00:06:44.229 00:06:44.229 ' 00:06:44.229 17:22:36 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.229 --rc genhtml_branch_coverage=1 00:06:44.229 --rc genhtml_function_coverage=1 00:06:44.229 --rc genhtml_legend=1 00:06:44.229 --rc geninfo_all_blocks=1 00:06:44.229 --rc geninfo_unexecuted_blocks=1 00:06:44.229 00:06:44.229 ' 00:06:44.229 17:22:36 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.229 --rc genhtml_branch_coverage=1 00:06:44.229 --rc genhtml_function_coverage=1 00:06:44.229 --rc genhtml_legend=1 00:06:44.229 --rc geninfo_all_blocks=1 00:06:44.229 --rc geninfo_unexecuted_blocks=1 00:06:44.229 00:06:44.229 ' 00:06:44.229 17:22:36 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.229 --rc genhtml_branch_coverage=1 00:06:44.229 --rc genhtml_function_coverage=1 00:06:44.229 --rc genhtml_legend=1 00:06:44.229 --rc geninfo_all_blocks=1 00:06:44.229 --rc geninfo_unexecuted_blocks=1 00:06:44.229 00:06:44.229 ' 00:06:44.229 17:22:36 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:44.229 17:22:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:44.229 17:22:36 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:44.229 17:22:36 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:44.229 17:22:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.229 17:22:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.229 ************************************ 00:06:44.229 START TEST event_perf 00:06:44.229 ************************************ 00:06:44.229 17:22:36 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:44.229 Running I/O for 1 seconds...[2024-12-06 17:22:36.135251] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:06:44.229 [2024-12-06 17:22:36.135346] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480087 ] 00:06:44.229 [2024-12-06 17:22:36.222927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.229 [2024-12-06 17:22:36.258482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.229 [2024-12-06 17:22:36.258634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.229 [2024-12-06 17:22:36.258785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.229 [2024-12-06 17:22:36.258878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.615 Running I/O for 1 seconds... 00:06:45.615 lcore 0: 175143 00:06:45.615 lcore 1: 175145 00:06:45.615 lcore 2: 175145 00:06:45.615 lcore 3: 175144 00:06:45.615 done. 00:06:45.615 00:06:45.615 real 0m1.173s 00:06:45.615 user 0m4.092s 00:06:45.615 sys 0m0.077s 00:06:45.615 17:22:37 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.615 17:22:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.615 ************************************ 00:06:45.615 END TEST event_perf 00:06:45.615 ************************************ 00:06:45.615 17:22:37 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:45.615 17:22:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:45.615 17:22:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.615 17:22:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.615 ************************************ 00:06:45.615 START TEST event_reactor 00:06:45.615 ************************************ 00:06:45.615 17:22:37 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:45.615 [2024-12-06 17:22:37.385194] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:06:45.615 [2024-12-06 17:22:37.385275] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480444 ] 00:06:45.615 [2024-12-06 17:22:37.475889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.615 [2024-12-06 17:22:37.511435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.556 test_start 00:06:46.556 oneshot 00:06:46.556 tick 100 00:06:46.556 tick 100 00:06:46.556 tick 250 00:06:46.556 tick 100 00:06:46.556 tick 100 00:06:46.556 tick 100 00:06:46.556 tick 250 00:06:46.556 tick 500 00:06:46.556 tick 100 00:06:46.556 tick 100 00:06:46.556 tick 250 00:06:46.556 tick 100 00:06:46.556 tick 100 00:06:46.556 test_end 00:06:46.556 00:06:46.556 real 0m1.173s 00:06:46.556 user 0m1.090s 00:06:46.556 sys 0m0.079s 00:06:46.556 17:22:38 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.556 17:22:38 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:46.556 ************************************ 00:06:46.556 END TEST event_reactor 00:06:46.556 ************************************ 00:06:46.556 17:22:38 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:46.556 17:22:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:46.556 17:22:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.556 17:22:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.556 ************************************ 00:06:46.556 START TEST event_reactor_perf 00:06:46.556 ************************************ 00:06:46.556 17:22:38 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:46.816 [2024-12-06 17:22:38.638335] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:06:46.816 [2024-12-06 17:22:38.638442] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480681 ] 00:06:46.816 [2024-12-06 17:22:38.727185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.816 [2024-12-06 17:22:38.768271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.760 test_start 00:06:47.760 test_end 00:06:47.760 Performance: 541730 events per second 00:06:47.760 00:06:47.760 real 0m1.177s 00:06:47.760 user 0m1.098s 00:06:47.760 sys 0m0.075s 00:06:47.760 17:22:39 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.760 17:22:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.760 ************************************ 00:06:47.760 END TEST event_reactor_perf 00:06:47.760 ************************************ 00:06:48.022 17:22:39 event -- event/event.sh@49 -- # uname -s 00:06:48.022 17:22:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:48.022 17:22:39 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:48.022 17:22:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.022 17:22:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.022 17:22:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.022 ************************************ 00:06:48.022 START TEST event_scheduler 00:06:48.022 ************************************ 00:06:48.022 17:22:39 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:48.022 * Looking for test storage... 
00:06:48.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:48.022 17:22:39 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:48.022 17:22:39 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:48.022 17:22:39 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.022 17:22:40 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.022 17:22:40 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:48.022 17:22:40 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.022 17:22:40 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:48.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.022 --rc genhtml_branch_coverage=1 00:06:48.022 --rc genhtml_function_coverage=1 00:06:48.022 --rc genhtml_legend=1 00:06:48.022 --rc geninfo_all_blocks=1 00:06:48.022 --rc geninfo_unexecuted_blocks=1 00:06:48.023 00:06:48.023 ' 00:06:48.023 17:22:40 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:48.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.023 --rc genhtml_branch_coverage=1 00:06:48.023 --rc genhtml_function_coverage=1 00:06:48.023 --rc genhtml_legend=1 00:06:48.023 --rc geninfo_all_blocks=1 00:06:48.023 --rc geninfo_unexecuted_blocks=1 00:06:48.023 00:06:48.023 ' 00:06:48.023 17:22:40 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:48.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.023 --rc genhtml_branch_coverage=1 00:06:48.023 --rc genhtml_function_coverage=1 00:06:48.023 --rc genhtml_legend=1 00:06:48.023 --rc geninfo_all_blocks=1 00:06:48.023 --rc geninfo_unexecuted_blocks=1 00:06:48.023 00:06:48.023 ' 00:06:48.023 17:22:40 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:48.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.023 --rc genhtml_branch_coverage=1 00:06:48.023 --rc genhtml_function_coverage=1 00:06:48.023 --rc genhtml_legend=1 00:06:48.023 --rc geninfo_all_blocks=1 00:06:48.023 --rc geninfo_unexecuted_blocks=1 00:06:48.023 00:06:48.023 ' 00:06:48.023 17:22:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:48.023 17:22:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1480950 00:06:48.023 17:22:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.023 17:22:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1480950 00:06:48.023 17:22:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:06:48.023 17:22:40 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1480950 ']' 00:06:48.023 17:22:40 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.023 17:22:40 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.023 17:22:40 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.023 17:22:40 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.023 17:22:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.284 [2024-12-06 17:22:40.135699] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:06:48.284 [2024-12-06 17:22:40.135780] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480950 ] 00:06:48.284 [2024-12-06 17:22:40.227929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.284 [2024-12-06 17:22:40.284151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.284 [2024-12-06 17:22:40.284289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.284 [2024-12-06 17:22:40.284459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.284 [2024-12-06 17:22:40.284459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.228 17:22:40 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.228 17:22:40 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:49.228 17:22:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:49.228 17:22:40 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.228 17:22:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.228 [2024-12-06 17:22:40.958838] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:49.228 [2024-12-06 17:22:40.958858] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:49.228 [2024-12-06 17:22:40.958869] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:49.228 [2024-12-06 17:22:40.958875] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:49.228 [2024-12-06 17:22:40.958880] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:49.228 17:22:40 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.228 17:22:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:49.228 17:22:40 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.228 17:22:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.228 [2024-12-06 17:22:41.026308] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:49.228 17:22:41 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.228 17:22:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:49.228 17:22:41 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.228 17:22:41 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.229 17:22:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.229 ************************************ 00:06:49.229 START TEST scheduler_create_thread 00:06:49.229 ************************************ 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.229 2 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.229 3 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.229 4 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.229 5 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.229 6 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.229 7 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.229 8 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.229 9 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.229 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.802 10 00:06:49.802 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.802 17:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:49.802 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.802 17:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.189 17:22:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.189 17:22:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:51.189 17:22:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:51.189 17:22:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.189 17:22:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.762 17:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.762 17:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:51.762 17:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.762 17:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.705 17:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.705 17:22:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:52.705 17:22:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:52.705 17:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.705 17:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.276 17:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.276 00:06:53.276 real 0m4.224s 00:06:53.276 user 0m0.025s 00:06:53.276 sys 0m0.006s 00:06:53.276 17:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.276 17:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.276 ************************************ 00:06:53.276 END TEST scheduler_create_thread 00:06:53.276 ************************************ 00:06:53.276 17:22:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:53.276 17:22:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1480950 00:06:53.276 17:22:45 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1480950 ']' 00:06:53.276 17:22:45 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1480950 00:06:53.276 17:22:45 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:53.276 17:22:45 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.276 17:22:45 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1480950 00:06:53.535 17:22:45 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:53.535 17:22:45 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:53.535 17:22:45 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480950' 00:06:53.535 killing process with pid 1480950 00:06:53.535 17:22:45 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1480950 00:06:53.535 17:22:45 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1480950 00:06:53.535 [2024-12-06 17:22:45.567976] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:53.797 00:06:53.797 real 0m5.848s 00:06:53.797 user 0m12.916s 00:06:53.797 sys 0m0.423s 00:06:53.797 17:22:45 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.797 17:22:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:53.797 ************************************ 00:06:53.797 END TEST event_scheduler 00:06:53.797 ************************************ 00:06:53.797 17:22:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:53.797 17:22:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:53.797 17:22:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.797 17:22:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.797 17:22:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.797 ************************************ 00:06:53.797 START TEST app_repeat 00:06:53.797 ************************************ 00:06:53.797 17:22:45 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1482252 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1482252' 00:06:53.797 Process app_repeat pid: 1482252 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:53.797 spdk_app_start Round 0 00:06:53.797 17:22:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1482252 /var/tmp/spdk-nbd.sock 00:06:53.797 17:22:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1482252 ']' 00:06:53.797 17:22:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.797 17:22:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.798 17:22:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:53.798 17:22:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.798 17:22:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.798 [2024-12-06 17:22:45.847170] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:06:53.798 [2024-12-06 17:22:45.847228] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482252 ] 00:06:54.059 [2024-12-06 17:22:45.931028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.059 [2024-12-06 17:22:45.963832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.059 [2024-12-06 17:22:45.963923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.059 17:22:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.059 17:22:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:54.059 17:22:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.319 Malloc0 00:06:54.319 17:22:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.580 Malloc1 00:06:54.580 17:22:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.580 /dev/nbd0 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.580 17:22:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.580 17:22:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:54.580 17:22:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:54.580 17:22:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.580 17:22:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.580 17:22:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:54.580 17:22:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:54.580 17:22:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.580 17:22:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.841 1+0 records in 00:06:54.841 1+0 records out 00:06:54.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023052 s, 17.8 MB/s 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:54.841 17:22:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.841 17:22:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.841 17:22:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.841 /dev/nbd1 00:06:54.841 17:22:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.841 17:22:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.841 1+0 records in 00:06:54.841 1+0 records out 00:06:54.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278871 s, 14.7 MB/s 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.841 17:22:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:54.841 17:22:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.841 17:22:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.841 17:22:46 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.841 17:22:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.841 17:22:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:55.101 { 00:06:55.101 "nbd_device": "/dev/nbd0", 00:06:55.101 "bdev_name": "Malloc0" 00:06:55.101 }, 00:06:55.101 { 00:06:55.101 "nbd_device": "/dev/nbd1", 00:06:55.101 "bdev_name": "Malloc1" 00:06:55.101 } 00:06:55.101 ]' 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.101 { 00:06:55.101 "nbd_device": "/dev/nbd0", 00:06:55.101 "bdev_name": "Malloc0" 00:06:55.101 }, 00:06:55.101 { 00:06:55.101 "nbd_device": "/dev/nbd1", 00:06:55.101 "bdev_name": "Malloc1" 00:06:55.101 } 00:06:55.101 ]' 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.101 /dev/nbd1' 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.101 /dev/nbd1' 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:55.101 256+0 records in 00:06:55.101 256+0 records out 00:06:55.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127516 s, 82.2 MB/s 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:55.101 256+0 records in 00:06:55.101 256+0 records out 00:06:55.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125372 s, 83.6 MB/s 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.101 17:22:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:55.362 256+0 records in 00:06:55.362 256+0 records out 00:06:55.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136344 s, 76.9 MB/s 00:06:55.362 17:22:47 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.362 17:22:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.363 17:22:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.363 17:22:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.363 17:22:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.363 17:22:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.363 17:22:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.363 17:22:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.363 17:22:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.622 17:22:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.622 17:22:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.622 17:22:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.622 17:22:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.622 17:22:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:55.622 17:22:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.622 17:22:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.622 17:22:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.622 17:22:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.622 17:22:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.623 17:22:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.882 17:22:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.882 17:22:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:56.142 17:22:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:56.142 [2024-12-06 17:22:48.095158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.142 [2024-12-06 17:22:48.122968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.142 [2024-12-06 17:22:48.122969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.142 [2024-12-06 17:22:48.152218] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:56.142 [2024-12-06 17:22:48.152252] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:59.436 17:22:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:59.436 17:22:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:59.436 spdk_app_start Round 1 00:06:59.436 17:22:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1482252 /var/tmp/spdk-nbd.sock 00:06:59.436 17:22:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1482252 ']' 00:06:59.436 17:22:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.436 17:22:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.436 17:22:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
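
Rounds 1 and 2 below repeat the data-integrity pass just traced: Round 0 wrote a 1 MiB random file through both nbd exports and compared it back. Stripped of the xtrace noise, the write and verify phases of nbd_dd_data_verify reduce to the dd and cmp calls visible above, reconstructed here with the trace's own paths and sizes (256 writes of 4 KiB with O_DIRECT, first 1 MiB compared back):

# Hedged reconstruction of the write+verify steps traced above.
tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

# Write phase: fill a 1 MiB scratch file with random data, then push it
# to each exported nbd device, bypassing the page cache with O_DIRECT.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done

# Verify phase: byte-compare the first 1 MiB of each device against the
# scratch file; cmp exits non-zero on the first mismatch and fails the test.
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"
done
rm "$tmp"

Using oflag=direct on the writes keeps the page cache out of the write path, so the later cmp observes data that actually went through the nbd/bdev stack.
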
00:06:59.436 17:22:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.436 17:22:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.436 17:22:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.436 17:22:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:59.436 17:22:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.436 Malloc0 00:06:59.436 17:22:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.697 Malloc1 00:06:59.697 17:22:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.697 17:22:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:59.959 /dev/nbd0 00:06:59.959 17:22:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.959 17:22:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:59.959 1+0 records in 00:06:59.959 1+0 records out 00:06:59.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282136 s, 14.5 MB/s 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.959 17:22:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:59.959 17:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.959 17:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.959 17:22:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.959 /dev/nbd1 00:06:59.959 17:22:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.959 17:22:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.959 1+0 records in 00:06:59.959 1+0 records out 00:06:59.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209066 s, 19.6 MB/s 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:59.959 17:22:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.221 17:22:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.221 17:22:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd0", 00:07:00.221 "bdev_name": "Malloc0" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd1", 00:07:00.221 "bdev_name": "Malloc1" 00:07:00.221 } 00:07:00.221 ]' 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd0", 00:07:00.221 "bdev_name": "Malloc0" 00:07:00.221 }, 00:07:00.221 { 00:07:00.221 "nbd_device": "/dev/nbd1", 00:07:00.221 "bdev_name": "Malloc1" 00:07:00.221 } 00:07:00.221 ]' 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.221 /dev/nbd1' 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.221 /dev/nbd1' 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.221 17:22:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:00.482 256+0 records in 00:07:00.482 256+0 records out 00:07:00.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124241 s, 84.4 MB/s 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.482 256+0 records in 00:07:00.482 256+0 records out 00:07:00.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011938 s, 87.8 MB/s 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.482 256+0 records in 00:07:00.482 256+0 records out 00:07:00.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133742 s, 78.4 MB/s 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.482 17:22:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.742 17:22:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:01.002 17:22:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:01.002 17:22:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:01.264 17:22:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.264 [2024-12-06 17:22:53.223893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.264 [2024-12-06 17:22:53.251500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.264 [2024-12-06 17:22:53.251500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.264 [2024-12-06 17:22:53.281169] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.264 [2024-12-06 17:22:53.281201] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.564 17:22:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:04.564 17:22:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:04.564 spdk_app_start Round 2 00:07:04.564 17:22:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1482252 /var/tmp/spdk-nbd.sock 00:07:04.564 17:22:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1482252 ']' 00:07:04.564 17:22:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.564 17:22:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.564 17:22:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
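
Round 2 below rebuilds the same fixture the earlier rounds used: two 64 MiB malloc bdevs with a 4 KiB block size created over the nbd app's RPC socket, each exported as a kernel nbd device, and stopped again once the comparison passes. The rpc.py sequence, condensed from the trace (socket path and arguments exactly as shown):

# Per-round bdev/nbd setup and teardown, condensed from the trace above.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create 64 4096          # creates Malloc0: 64 MiB, 4 KiB blocks
$rpc bdev_malloc_create 64 4096          # creates Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0    # export each bdev as a kernel nbd device
$rpc nbd_start_disk Malloc1 /dev/nbd1
$rpc nbd_get_disks                       # JSON list consumed by nbd_get_count
# ... dd/cmp data verify as sketched after Round 0 ...
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
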
00:07:04.564 17:22:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.564 17:22:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.564 17:22:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.564 17:22:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:04.564 17:22:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.564 Malloc0 00:07:04.564 17:22:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.825 Malloc1 00:07:04.825 17:22:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.825 17:22:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.825 /dev/nbd0 00:07:05.086 17:22:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.086 17:22:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:05.086 1+0 records in 00:07:05.086 1+0 records out 00:07:05.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313628 s, 13.1 MB/s 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.086 17:22:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:05.086 17:22:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.086 17:22:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.086 17:22:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.086 /dev/nbd1 00:07:05.086 17:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.086 17:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.086 1+0 records in 00:07:05.086 1+0 records out 00:07:05.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283706 s, 14.4 MB/s 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.086 17:22:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:05.086 17:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.086 17:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.086 17:22:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.086 17:22:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:05.348 { 00:07:05.348 "nbd_device": "/dev/nbd0", 00:07:05.348 "bdev_name": "Malloc0" 00:07:05.348 }, 00:07:05.348 { 00:07:05.348 "nbd_device": "/dev/nbd1", 00:07:05.348 "bdev_name": "Malloc1" 00:07:05.348 } 00:07:05.348 ]' 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.348 { 00:07:05.348 "nbd_device": "/dev/nbd0", 00:07:05.348 "bdev_name": "Malloc0" 00:07:05.348 }, 00:07:05.348 { 00:07:05.348 "nbd_device": "/dev/nbd1", 00:07:05.348 "bdev_name": "Malloc1" 00:07:05.348 } 00:07:05.348 ]' 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.348 /dev/nbd1' 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.348 /dev/nbd1' 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.348 256+0 records in 00:07:05.348 256+0 records out 00:07:05.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127077 s, 82.5 MB/s 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.348 256+0 records in 00:07:05.348 256+0 records out 00:07:05.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124179 s, 84.4 MB/s 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.348 17:22:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.609 256+0 records in 00:07:05.609 256+0 records out 00:07:05.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128387 s, 81.7 MB/s 00:07:05.609 17:22:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.609 17:22:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.610 17:22:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.870 17:22:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.131 17:22:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.131 17:22:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.393 17:22:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:06.393 [2024-12-06 17:22:58.396953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.393 [2024-12-06 17:22:58.424795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.393 [2024-12-06 17:22:58.424883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.393 [2024-12-06 17:22:58.453971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:06.393 [2024-12-06 17:22:58.454004] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:09.698 17:23:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1482252 /var/tmp/spdk-nbd.sock 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1482252 ']' 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
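
The nbd_get_count trace above is each round's closing assertion: after both nbd_stop_disk calls, nbd_get_disks must come back as an empty list. jq pulls the .nbd_device fields out of the JSON, grep -c counts the /dev/nbd matches, and the bare 'true' step in the trace suggests an || true guard, since grep -c exits 1 whenever it prints 0. A condensed, hedged form of that check:

# Hedged sketch of the nbd_get_count assertion; the || true guard is
# inferred from the trailing 'true' step in the trace.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

names=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)  # grep -c prints 0 but exits 1 on no match
[ "$count" -eq 0 ] || exit 1                       # mirrors the '[' 0 -ne 0 ']' check above
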
00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:09.698 17:23:01 event.app_repeat -- event/event.sh@39 -- # killprocess 1482252 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1482252 ']' 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1482252 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1482252 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1482252' 00:07:09.698 killing process with pid 1482252 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1482252 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1482252 00:07:09.698 spdk_app_start is called in Round 0. 00:07:09.698 Shutdown signal received, stop current app iteration 00:07:09.698 Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 reinitialization... 00:07:09.698 spdk_app_start is called in Round 1. 00:07:09.698 Shutdown signal received, stop current app iteration 00:07:09.698 Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 reinitialization... 00:07:09.698 spdk_app_start is called in Round 2. 00:07:09.698 Shutdown signal received, stop current app iteration 00:07:09.698 Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 reinitialization... 00:07:09.698 spdk_app_start is called in Round 3. 
00:07:09.698 Shutdown signal received, stop current app iteration 00:07:09.698 17:23:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:09.698 17:23:01 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:09.698 00:07:09.698 real 0m15.852s 00:07:09.698 user 0m34.889s 00:07:09.698 sys 0m2.296s 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.698 17:23:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.698 ************************************ 00:07:09.698 END TEST app_repeat 00:07:09.698 ************************************ 00:07:09.698 17:23:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:09.698 17:23:01 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:09.698 17:23:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.698 17:23:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.698 17:23:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.698 ************************************ 00:07:09.698 START TEST cpu_locks 00:07:09.698 ************************************ 00:07:09.698 17:23:01 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:09.960 * Looking for test storage... 00:07:09.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:09.960 17:23:01 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.960 17:23:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.960 17:23:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.960 17:23:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.960 17:23:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.961 17:23:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:09.961 17:23:01 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.961 17:23:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.961 --rc genhtml_branch_coverage=1 00:07:09.961 --rc genhtml_function_coverage=1 00:07:09.961 --rc genhtml_legend=1 00:07:09.961 --rc geninfo_all_blocks=1 00:07:09.961 --rc geninfo_unexecuted_blocks=1 00:07:09.961 00:07:09.961 ' 00:07:09.961 17:23:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:09.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.961 --rc genhtml_branch_coverage=1 00:07:09.961 --rc genhtml_function_coverage=1 00:07:09.961 --rc genhtml_legend=1 00:07:09.961 --rc geninfo_all_blocks=1 00:07:09.961 --rc geninfo_unexecuted_blocks=1 00:07:09.961 00:07:09.961 ' 00:07:09.961 17:23:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.961 --rc genhtml_branch_coverage=1 00:07:09.961 --rc genhtml_function_coverage=1 00:07:09.961 --rc genhtml_legend=1 00:07:09.961 --rc geninfo_all_blocks=1 00:07:09.961 --rc geninfo_unexecuted_blocks=1 00:07:09.961 00:07:09.961 ' 00:07:09.961 17:23:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.961 --rc genhtml_branch_coverage=1 00:07:09.961 --rc genhtml_function_coverage=1 00:07:09.961 --rc genhtml_legend=1 00:07:09.961 --rc geninfo_all_blocks=1 00:07:09.961 --rc geninfo_unexecuted_blocks=1 00:07:09.961 00:07:09.961 ' 00:07:09.961 17:23:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:09.961 17:23:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:09.961 17:23:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:09.961 17:23:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:09.961 17:23:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.961 17:23:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.961 17:23:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.961 ************************************ 
00:07:09.961 START TEST default_locks 00:07:09.961 ************************************ 00:07:09.961 17:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:09.961 17:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1485528 00:07:09.961 17:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1485528 00:07:09.961 17:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.961 17:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1485528 ']' 00:07:09.961 17:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.961 17:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.961 17:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.961 17:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.961 17:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.222 [2024-12-06 17:23:02.034456] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:07:10.222 [2024-12-06 17:23:02.034524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485528 ] 00:07:10.222 [2024-12-06 17:23:02.119844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.222 [2024-12-06 17:23:02.157152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.794 17:23:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.794 17:23:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:10.794 17:23:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1485528 00:07:10.794 17:23:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1485528 00:07:10.794 17:23:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.054 lslocks: write error 00:07:11.054 17:23:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1485528 00:07:11.054 17:23:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1485528 ']' 00:07:11.054 17:23:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1485528 00:07:11.054 17:23:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:11.054 17:23:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.054 17:23:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1485528 00:07:11.054 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.054 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.055 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1485528' 00:07:11.055 killing process with pid 1485528 00:07:11.055 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1485528 00:07:11.055 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1485528 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1485528 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1485528 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1485528 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1485528 ']' 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
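
A note on the lock probe driving default_locks above: locks_exist asks whether a pid holds a file lock on one of the /var/tmp/spdk_cpu_lock_* files. A minimal sketch, assuming only the lslocks | grep pipeline visible in the trace; the stray "lslocks: write error" is lslocks hitting a closed pipe once grep -q matches and exits, not a test failure:

    locks_exist() {
        # true when the given pid holds an spdk per-core lock file, as checked above
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 1485528 && echo "core lock held"   # pid taken from the trace
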
00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1485528) - No such process 00:07:11.317 ERROR: process (pid: 1485528) is no longer running 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.317 00:07:11.317 real 0m1.240s 00:07:11.317 user 0m1.334s 00:07:11.317 sys 0m0.419s 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.317 17:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.317 ************************************ 00:07:11.317 END TEST default_locks 00:07:11.317 ************************************ 00:07:11.317 17:23:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:11.317 17:23:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.317 17:23:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.317 17:23:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.317 ************************************ 00:07:11.317 START TEST default_locks_via_rpc 00:07:11.317 ************************************ 00:07:11.317 17:23:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:11.317 17:23:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1485889 00:07:11.318 17:23:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1485889 00:07:11.318 17:23:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.318 17:23:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1485889 ']' 00:07:11.318 17:23:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.318 17:23:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.318 17:23:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
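
The NOT wrapper exercised above inverts the wrapped command's exit status, so the step passes precisely because waitforlisten on the killed pid fails. A reduced sketch of the pattern; the real helper in autotest_common.sh also validates its argument and tracks es more carefully:

    NOT() {
        # succeed only when the wrapped command fails (reduced sketch)
        if "$@"; then return 1; else return 0; fi
    }
    NOT waitforlisten 1485528 && echo "dead target correctly rejected"
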
00:07:11.318 17:23:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.318 17:23:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.318 [2024-12-06 17:23:03.348149] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:07:11.318 [2024-12-06 17:23:03.348206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485889 ] 00:07:11.580 [2024-12-06 17:23:03.437091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.580 [2024-12-06 17:23:03.472782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1485889 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1485889 00:07:12.152 17:23:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.412 17:23:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1485889 00:07:12.412 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1485889 ']' 00:07:12.412 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1485889 00:07:12.412 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:12.412 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.412 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1485889 00:07:12.412 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.412 
17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.412 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1485889' 00:07:12.412 killing process with pid 1485889 00:07:12.412 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1485889 00:07:12.412 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1485889 00:07:12.673 00:07:12.673 real 0m1.345s 00:07:12.673 user 0m1.464s 00:07:12.673 sys 0m0.453s 00:07:12.673 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.673 17:23:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.673 ************************************ 00:07:12.673 END TEST default_locks_via_rpc 00:07:12.673 ************************************ 00:07:12.673 17:23:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:12.673 17:23:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.673 17:23:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.673 17:23:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.673 ************************************ 00:07:12.673 START TEST non_locking_app_on_locked_coremask 00:07:12.673 ************************************ 00:07:12.673 17:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:12.673 17:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1486248 00:07:12.673 17:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1486248 /var/tmp/spdk.sock 00:07:12.673 17:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.673 17:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1486248 ']' 00:07:12.673 17:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.673 17:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.673 17:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.673 17:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.673 17:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.933 [2024-12-06 17:23:04.770084] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
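
default_locks_via_rpc, wrapped up just above, toggles the same per-core locks at runtime rather than at startup. Condensed to the two RPCs from the trace, with the rpc.py path as logged and the default /var/tmp/spdk.sock socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks        # releases the held lock file(s)
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null     # expect no matches while disabled
    $rpc framework_enable_cpumask_locks         # lock for core 0 (-m 0x1) is retaken
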
00:07:12.933 [2024-12-06 17:23:04.770142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486248 ] 00:07:12.933 [2024-12-06 17:23:04.855153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.933 [2024-12-06 17:23:04.888208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1486320 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1486320 /var/tmp/spdk2.sock 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1486320 ']' 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.502 17:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.763 [2024-12-06 17:23:05.617658] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:07:13.763 [2024-12-06 17:23:05.617713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486320 ] 00:07:13.763 [2024-12-06 17:23:05.701916] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
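
The pair of launches above is the whole point of non_locking_app_on_locked_coremask: the first target claims core 0's lock, and the second, started with --disable-cpumask-locks, skips locking entirely and can share the core. Boiled down, with the binary path and flags exactly as in the trace:

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $tgt -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
    $tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # logs "CPU core locks deactivated."
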
00:07:13.763 [2024-12-06 17:23:05.701940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.763 [2024-12-06 17:23:05.764118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.333 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.333 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:14.333 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1486248 00:07:14.333 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1486248 00:07:14.333 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.273 lslocks: write error 00:07:15.273 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1486248 00:07:15.273 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1486248 ']' 00:07:15.273 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1486248 00:07:15.273 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.273 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.273 17:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486248 00:07:15.273 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.273 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.273 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486248' 00:07:15.273 killing process with pid 1486248 00:07:15.273 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1486248 00:07:15.273 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1486248 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1486320 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1486320 ']' 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1486320 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486320 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486320' 00:07:15.534 
killing process with pid 1486320 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1486320 00:07:15.534 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1486320 00:07:15.795 00:07:15.795 real 0m2.946s 00:07:15.795 user 0m3.266s 00:07:15.795 sys 0m0.934s 00:07:15.795 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.795 17:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.795 ************************************ 00:07:15.795 END TEST non_locking_app_on_locked_coremask 00:07:15.795 ************************************ 00:07:15.795 17:23:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:15.795 17:23:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.795 17:23:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.795 17:23:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.795 ************************************ 00:07:15.795 START TEST locking_app_on_unlocked_coremask 00:07:15.795 ************************************ 00:07:15.795 17:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:15.795 17:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1486935 00:07:15.795 17:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1486935 /var/tmp/spdk.sock 00:07:15.795 17:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:15.795 17:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1486935 ']' 00:07:15.795 17:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.795 17:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.795 17:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.795 17:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.795 17:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.795 [2024-12-06 17:23:07.794607] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:07:15.795 [2024-12-06 17:23:07.794673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486935 ] 00:07:16.056 [2024-12-06 17:23:07.877386] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:16.056 [2024-12-06 17:23:07.877414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.056 [2024-12-06 17:23:07.911829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1486975 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1486975 /var/tmp/spdk2.sock 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1486975 ']' 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.626 17:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.626 [2024-12-06 17:23:08.614284] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
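
locking_app_on_unlocked_coremask, now underway, flips those roles: because the first instance (pid 1486935 in the trace) opted out of locking, core 0's lock file stays free for the second, normally-locking instance (pid 1486975) to claim. Sketched under the same assumptions:

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $tgt -m 0x1 --disable-cpumask-locks &     # first instance takes no lock file
    $tgt -m 0x1 -r /var/tmp/spdk2.sock &      # second instance acquires /var/tmp/spdk_cpu_lock_000
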
00:07:16.626 [2024-12-06 17:23:08.614335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486975 ] 00:07:16.887 [2024-12-06 17:23:08.699041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.887 [2024-12-06 17:23:08.758200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.618 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.618 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:17.618 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1486975 00:07:17.618 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1486975 00:07:17.618 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.878 lslocks: write error 00:07:17.878 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1486935 00:07:17.878 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1486935 ']' 00:07:17.878 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1486935 00:07:17.878 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:17.878 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.878 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486935 00:07:18.138 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.138 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.138 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486935' 00:07:18.138 killing process with pid 1486935 00:07:18.138 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1486935 00:07:18.138 17:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1486935 00:07:18.399 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1486975 00:07:18.399 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1486975 ']' 00:07:18.399 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1486975 00:07:18.399 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:18.399 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.399 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486975 00:07:18.399 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.399 17:23:10 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.399 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486975' 00:07:18.399 killing process with pid 1486975 00:07:18.399 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1486975 00:07:18.399 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1486975 00:07:18.681 00:07:18.681 real 0m2.849s 00:07:18.681 user 0m3.166s 00:07:18.681 sys 0m0.858s 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.681 ************************************ 00:07:18.681 END TEST locking_app_on_unlocked_coremask 00:07:18.681 ************************************ 00:07:18.681 17:23:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:18.681 17:23:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.681 17:23:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.681 17:23:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.681 ************************************ 00:07:18.681 START TEST locking_app_on_locked_coremask 00:07:18.681 ************************************ 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1487419 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1487419 /var/tmp/spdk.sock 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1487419 ']' 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.681 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.681 [2024-12-06 17:23:10.718702] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:07:18.681 [2024-12-06 17:23:10.718747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487419 ] 00:07:18.943 [2024-12-06 17:23:10.767966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.943 [2024-12-06 17:23:10.797839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.943 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.943 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:18.943 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1487579 00:07:18.943 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1487579 /var/tmp/spdk2.sock 00:07:18.943 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:18.943 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1487579 /var/tmp/spdk2.sock 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1487579 /var/tmp/spdk2.sock 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1487579 ']' 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.944 17:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.204 [2024-12-06 17:23:11.038455] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:07:19.204 [2024-12-06 17:23:11.038507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487579 ] 00:07:19.204 [2024-12-06 17:23:11.122548] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1487419 has claimed it. 00:07:19.204 [2024-12-06 17:23:11.122578] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1487579) - No such process 00:07:19.774 ERROR: process (pid: 1487579) is no longer running 00:07:19.774 17:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.774 17:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:19.774 17:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:19.774 17:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.774 17:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.774 17:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.774 17:23:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1487419 00:07:19.774 17:23:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1487419 00:07:19.774 17:23:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.043 lslocks: write error 00:07:20.043 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1487419 00:07:20.043 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1487419 ']' 00:07:20.044 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1487419 00:07:20.044 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:20.044 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.044 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1487419 00:07:20.307 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.307 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.307 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1487419' 00:07:20.307 killing process with pid 1487419 00:07:20.307 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1487419 00:07:20.307 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1487419 00:07:20.307 00:07:20.307 real 0m1.664s 00:07:20.307 user 0m1.835s 00:07:20.307 sys 0m0.572s 00:07:20.307 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:07:20.307 17:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.307 ************************************ 00:07:20.307 END TEST locking_app_on_locked_coremask 00:07:20.307 ************************************ 00:07:20.307 17:23:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:20.307 17:23:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.307 17:23:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.307 17:23:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.567 ************************************ 00:07:20.567 START TEST locking_overlapped_coremask 00:07:20.567 ************************************ 00:07:20.567 17:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:20.567 17:23:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1487801 00:07:20.567 17:23:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1487801 /var/tmp/spdk.sock 00:07:20.567 17:23:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:20.567 17:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1487801 ']' 00:07:20.567 17:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.567 17:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.567 17:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.567 17:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.567 17:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.567 [2024-12-06 17:23:12.460622] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:07:20.567 [2024-12-06 17:23:12.460690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487801 ] 00:07:20.567 [2024-12-06 17:23:12.547460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.567 [2024-12-06 17:23:12.583786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.567 [2024-12-06 17:23:12.584036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.567 [2024-12-06 17:23:12.584037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.526 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.526 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:21.526 17:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1488056 00:07:21.526 17:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1488056 /var/tmp/spdk2.sock 00:07:21.526 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:21.526 17:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:21.526 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1488056 /var/tmp/spdk2.sock 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1488056 /var/tmp/spdk2.sock 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1488056 ']' 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.527 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.527 [2024-12-06 17:23:13.321263] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
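
The failure being provoked above is plain mask arithmetic: the first target runs with -m 0x7, the second asks for -m 0x1c, and the two masks intersect. Worked out in bash:

    # 0x7  = 0b00111 -> cores 0,1,2 (locked by the first target, pid 1487801)
    # 0x1c = 0b11100 -> cores 2,3,4 (requested by the second target)
    printf 'contested: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4 -> bit 2 -> core 2
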
00:07:21.527 [2024-12-06 17:23:13.321318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488056 ] 00:07:21.527 [2024-12-06 17:23:13.431641] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1487801 has claimed it. 00:07:21.527 [2024-12-06 17:23:13.431681] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:22.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1488056) - No such process 00:07:22.098 ERROR: process (pid: 1488056) is no longer running 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1487801 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1487801 ']' 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1487801 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1487801 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.098 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1487801' 00:07:22.099 killing process with pid 1487801 00:07:22.099 17:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1487801 00:07:22.099 17:23:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1487801 00:07:22.360 00:07:22.360 real 0m1.783s 00:07:22.360 user 0m5.163s 00:07:22.360 sys 0m0.389s 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.360 ************************************ 00:07:22.360 END TEST locking_overlapped_coremask 00:07:22.360 ************************************ 00:07:22.360 17:23:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:22.360 17:23:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.360 17:23:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.360 17:23:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.360 ************************************ 00:07:22.360 START TEST locking_overlapped_coremask_via_rpc 00:07:22.360 ************************************ 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1488258 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1488258 /var/tmp/spdk.sock 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1488258 ']' 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.360 17:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.360 [2024-12-06 17:23:14.320882] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:07:22.360 [2024-12-06 17:23:14.320936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488258 ] 00:07:22.360 [2024-12-06 17:23:14.407012] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
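check_remaining_locks, traced in the test above, is the assertion that exactly one lock file exists per claimed core: it globs /var/tmp/spdk_cpu_lock_* and compares the result against the brace expansion for cores 000..002 (matching mask 0x7). The same comparison as a standalone snippet (paths as in the trace; only meaningful against a live target holding the locks):

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'exactly cores 0-2 are locked'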
00:07:22.360 [2024-12-06 17:23:14.407060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.621 [2024-12-06 17:23:14.449703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.621 [2024-12-06 17:23:14.450032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.621 [2024-12-06 17:23:14.450033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1488432 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1488432 /var/tmp/spdk2.sock 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1488432 ']' 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.192 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.192 [2024-12-06 17:23:15.178993] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:07:23.192 [2024-12-06 17:23:15.179047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488432 ] 00:07:23.453 [2024-12-06 17:23:15.290958] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
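Both spdk_tgt instances in this test are started with --disable-cpumask-locks, which is why the overlapping masks 0x7 and 0x1c can boot side by side (each prints "CPU core locks deactivated" instead of claiming lock files at startup). The locks are then taken at runtime over JSON-RPC, which is where the conflict surfaces. Abbreviated flow, with binary and script paths shortened from the trace (the test also waits for each socket before issuing RPCs):

  spdk_tgt -m 0x7 --disable-cpumask-locks &                           # target 1, no locks yet
  spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # target 2, overlap allowed
  rpc.py framework_enable_cpumask_locks                               # target 1 claims cores 0-2
  rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks        # fails: core 2 already claimed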
00:07:23.453 [2024-12-06 17:23:15.290992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.453 [2024-12-06 17:23:15.370098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.453 [2024-12-06 17:23:15.370256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.453 [2024-12-06 17:23:15.370258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:24.023 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.024 [2024-12-06 17:23:15.978726] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1488258 has claimed it. 
00:07:24.024 request: 00:07:24.024 { 00:07:24.024 "method": "framework_enable_cpumask_locks", 00:07:24.024 "req_id": 1 00:07:24.024 } 00:07:24.024 Got JSON-RPC error response 00:07:24.024 response: 00:07:24.024 { 00:07:24.024 "code": -32603, 00:07:24.024 "message": "Failed to claim CPU core: 2" 00:07:24.024 } 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1488258 /var/tmp/spdk.sock 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1488258 ']' 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.024 17:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.284 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.284 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.284 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1488432 /var/tmp/spdk2.sock 00:07:24.284 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1488432 ']' 00:07:24.284 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.284 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.284 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
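The NOT wrapper around the failing rpc_cmd above (and around waitforlisten in the earlier test) makes the assertion pass only when the wrapped command fails; the es/valid_exec_arg bookkeeping in the trace is its argument validation and exit-status plumbing. A minimal stand-in capturing just the inversion (the real helper in autotest_common.sh does more, e.g. the special-casing of exit codes above 128 visible in the (( es > 128 )) check):

  NOT() {
    if "$@"; then return 1; else return 0; fi   # succeed iff the command failed
  }
  NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # passes: the RPC errors out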
00:07:24.284 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.284 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.544 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.544 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.544 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:24.544 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:24.544 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:24.544 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:24.544 00:07:24.544 real 0m2.091s 00:07:24.544 user 0m0.871s 00:07:24.544 sys 0m0.146s 00:07:24.544 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.544 17:23:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.544 ************************************ 00:07:24.544 END TEST locking_overlapped_coremask_via_rpc 00:07:24.544 ************************************ 00:07:24.544 17:23:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:24.544 17:23:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1488258 ]] 00:07:24.544 17:23:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1488258 00:07:24.545 17:23:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1488258 ']' 00:07:24.545 17:23:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1488258 00:07:24.545 17:23:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:24.545 17:23:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.545 17:23:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1488258 00:07:24.545 17:23:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.545 17:23:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.545 17:23:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1488258' 00:07:24.545 killing process with pid 1488258 00:07:24.545 17:23:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1488258 00:07:24.545 17:23:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1488258 00:07:24.805 17:23:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1488432 ]] 00:07:24.805 17:23:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1488432 00:07:24.805 17:23:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1488432 ']' 00:07:24.805 17:23:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1488432 00:07:24.805 17:23:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:24.805 17:23:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:07:24.805 17:23:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1488432 00:07:24.805 17:23:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:24.805 17:23:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:24.805 17:23:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1488432' 00:07:24.805 killing process with pid 1488432 00:07:24.805 17:23:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1488432 00:07:24.805 17:23:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1488432 00:07:25.066 17:23:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.066 17:23:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:25.066 17:23:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1488258 ]] 00:07:25.066 17:23:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1488258 00:07:25.066 17:23:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1488258 ']' 00:07:25.066 17:23:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1488258 00:07:25.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1488258) - No such process 00:07:25.066 17:23:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1488258 is not found' 00:07:25.066 Process with pid 1488258 is not found 00:07:25.066 17:23:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1488432 ]] 00:07:25.066 17:23:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1488432 00:07:25.066 17:23:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1488432 ']' 00:07:25.066 17:23:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1488432 00:07:25.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1488432) - No such process 00:07:25.066 17:23:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1488432 is not found' 00:07:25.066 Process with pid 1488432 is not found 00:07:25.066 17:23:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.066 00:07:25.066 real 0m15.244s 00:07:25.066 user 0m27.272s 00:07:25.067 sys 0m4.737s 00:07:25.067 17:23:16 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.067 17:23:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.067 ************************************ 00:07:25.067 END TEST cpu_locks 00:07:25.067 ************************************ 00:07:25.067 00:07:25.067 real 0m41.143s 00:07:25.067 user 1m21.658s 00:07:25.067 sys 0m8.097s 00:07:25.067 17:23:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.067 17:23:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.067 ************************************ 00:07:25.067 END TEST event 00:07:25.067 ************************************ 00:07:25.067 17:23:17 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:25.067 17:23:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.067 17:23:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.067 17:23:17 -- common/autotest_common.sh@10 -- # set +x 00:07:25.067 ************************************ 00:07:25.067 START TEST thread 00:07:25.067 ************************************ 00:07:25.067 17:23:17 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:25.328 * Looking for test storage... 00:07:25.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:25.328 17:23:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.328 17:23:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.328 17:23:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.328 17:23:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.328 17:23:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.328 17:23:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.328 17:23:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.328 17:23:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.328 17:23:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.328 17:23:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.328 17:23:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.328 17:23:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:25.328 17:23:17 thread -- scripts/common.sh@345 -- # : 1 00:07:25.328 17:23:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.328 17:23:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.328 17:23:17 thread -- scripts/common.sh@365 -- # decimal 1 00:07:25.328 17:23:17 thread -- scripts/common.sh@353 -- # local d=1 00:07:25.328 17:23:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.328 17:23:17 thread -- scripts/common.sh@355 -- # echo 1 00:07:25.328 17:23:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.328 17:23:17 thread -- scripts/common.sh@366 -- # decimal 2 00:07:25.328 17:23:17 thread -- scripts/common.sh@353 -- # local d=2 00:07:25.328 17:23:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.328 17:23:17 thread -- scripts/common.sh@355 -- # echo 2 00:07:25.328 17:23:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.328 17:23:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.328 17:23:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.328 17:23:17 thread -- scripts/common.sh@368 -- # return 0 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:25.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.328 --rc genhtml_branch_coverage=1 00:07:25.328 --rc genhtml_function_coverage=1 00:07:25.328 --rc genhtml_legend=1 00:07:25.328 --rc geninfo_all_blocks=1 00:07:25.328 --rc geninfo_unexecuted_blocks=1 00:07:25.328 00:07:25.328 ' 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:25.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.328 --rc genhtml_branch_coverage=1 00:07:25.328 --rc genhtml_function_coverage=1 00:07:25.328 --rc genhtml_legend=1 00:07:25.328 --rc geninfo_all_blocks=1 00:07:25.328 --rc geninfo_unexecuted_blocks=1 00:07:25.328 
00:07:25.328 ' 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:25.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.328 --rc genhtml_branch_coverage=1 00:07:25.328 --rc genhtml_function_coverage=1 00:07:25.328 --rc genhtml_legend=1 00:07:25.328 --rc geninfo_all_blocks=1 00:07:25.328 --rc geninfo_unexecuted_blocks=1 00:07:25.328 00:07:25.328 ' 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:25.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.328 --rc genhtml_branch_coverage=1 00:07:25.328 --rc genhtml_function_coverage=1 00:07:25.328 --rc genhtml_legend=1 00:07:25.328 --rc geninfo_all_blocks=1 00:07:25.328 --rc geninfo_unexecuted_blocks=1 00:07:25.328 00:07:25.328 ' 00:07:25.328 17:23:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.328 17:23:17 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.328 ************************************ 00:07:25.328 START TEST thread_poller_perf 00:07:25.328 ************************************ 00:07:25.328 17:23:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.328 [2024-12-06 17:23:17.363091] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:07:25.328 [2024-12-06 17:23:17.363209] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488890 ] 00:07:25.589 [2024-12-06 17:23:17.451518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.589 [2024-12-06 17:23:17.491071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.589 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:26.532 [2024-12-06T16:23:18.598Z] ====================================== 00:07:26.532 [2024-12-06T16:23:18.598Z] busy:2405940254 (cyc) 00:07:26.532 [2024-12-06T16:23:18.598Z] total_run_count: 419000 00:07:26.532 [2024-12-06T16:23:18.598Z] tsc_hz: 2400000000 (cyc) 00:07:26.532 [2024-12-06T16:23:18.598Z] ====================================== 00:07:26.532 [2024-12-06T16:23:18.598Z] poller_cost: 5742 (cyc), 2392 (nsec) 00:07:26.532 00:07:26.532 real 0m1.183s 00:07:26.532 user 0m1.095s 00:07:26.532 sys 0m0.083s 00:07:26.532 17:23:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.532 17:23:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.532 ************************************ 00:07:26.532 END TEST thread_poller_perf 00:07:26.532 ************************************ 00:07:26.532 17:23:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:26.532 17:23:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:26.532 17:23:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.532 17:23:18 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.793 ************************************ 00:07:26.793 START TEST thread_poller_perf 00:07:26.793 ************************************ 00:07:26.793 17:23:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:26.793 [2024-12-06 17:23:18.625335] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:07:26.793 [2024-12-06 17:23:18.625432] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489236 ] 00:07:26.793 [2024-12-06 17:23:18.713178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.793 [2024-12-06 17:23:18.742843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.793 Running 1000 pollers for 1 seconds with 0 microseconds period. 
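The first results block above is internally consistent: poller_cost is busy cycles divided by total_run_count, then converted to nanoseconds at the reported tsc_hz of 2400000000 (2.4 cycles per nanosecond). Reproducing the printed 5742 (cyc) and 2392 (nsec) with shell arithmetic:

  echo $(( 2405940254 / 419000 ))   # -> 5742 cycles per poller invocation
  echo $(( 5742 * 10 / 24 ))        # -> 2392 nsec at 2.4 GHz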
00:07:27.736 [2024-12-06T16:23:19.802Z] ====================================== 00:07:27.736 [2024-12-06T16:23:19.802Z] busy:2401385462 (cyc) 00:07:27.736 [2024-12-06T16:23:19.802Z] total_run_count: 5104000 00:07:27.736 [2024-12-06T16:23:19.802Z] tsc_hz: 2400000000 (cyc) 00:07:27.736 [2024-12-06T16:23:19.802Z] ====================================== 00:07:27.736 [2024-12-06T16:23:19.802Z] poller_cost: 470 (cyc), 195 (nsec) 00:07:27.736 00:07:27.736 real 0m1.167s 00:07:27.736 user 0m1.082s 00:07:27.736 sys 0m0.081s 00:07:27.736 17:23:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.736 17:23:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.736 ************************************ 00:07:27.736 END TEST thread_poller_perf 00:07:27.736 ************************************ 00:07:27.996 17:23:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:27.996 00:07:27.996 real 0m2.711s 00:07:27.996 user 0m2.364s 00:07:27.996 sys 0m0.358s 00:07:27.996 17:23:19 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.996 17:23:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.996 ************************************ 00:07:27.996 END TEST thread 00:07:27.996 ************************************ 00:07:27.996 17:23:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:27.996 17:23:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.996 17:23:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.996 17:23:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.996 17:23:19 -- common/autotest_common.sh@10 -- # set +x 00:07:27.996 ************************************ 00:07:27.996 START TEST app_cmdline 00:07:27.996 ************************************ 00:07:27.996 17:23:19 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.996 * Looking for test storage... 
00:07:27.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.996 17:23:19 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:27.996 17:23:19 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:27.996 17:23:19 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:27.996 17:23:20 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:27.996 17:23:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.996 17:23:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.996 17:23:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.257 17:23:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:28.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.257 --rc genhtml_branch_coverage=1 00:07:28.257 --rc genhtml_function_coverage=1 00:07:28.257 --rc genhtml_legend=1 00:07:28.257 --rc geninfo_all_blocks=1 00:07:28.257 --rc geninfo_unexecuted_blocks=1 00:07:28.257 00:07:28.257 ' 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:28.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.257 --rc genhtml_branch_coverage=1 00:07:28.257 --rc genhtml_function_coverage=1 00:07:28.257 --rc genhtml_legend=1 00:07:28.257 --rc geninfo_all_blocks=1 00:07:28.257 --rc geninfo_unexecuted_blocks=1 
00:07:28.257 00:07:28.257 ' 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:28.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.257 --rc genhtml_branch_coverage=1 00:07:28.257 --rc genhtml_function_coverage=1 00:07:28.257 --rc genhtml_legend=1 00:07:28.257 --rc geninfo_all_blocks=1 00:07:28.257 --rc geninfo_unexecuted_blocks=1 00:07:28.257 00:07:28.257 ' 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:28.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.257 --rc genhtml_branch_coverage=1 00:07:28.257 --rc genhtml_function_coverage=1 00:07:28.257 --rc genhtml_legend=1 00:07:28.257 --rc geninfo_all_blocks=1 00:07:28.257 --rc geninfo_unexecuted_blocks=1 00:07:28.257 00:07:28.257 ' 00:07:28.257 17:23:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:28.257 17:23:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1489631 00:07:28.257 17:23:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1489631 00:07:28.257 17:23:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1489631 ']' 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.257 17:23:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.257 [2024-12-06 17:23:20.151092] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
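This spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the target answers exactly those two methods and nothing else; the version JSON printed below and the later -32601 failure both follow from that. With the repo's rpc.py (full path shortened from the trace):

  rpc.py spdk_get_version                        # allowed: returns the version object shown below
  rpc.py rpc_get_methods | jq -r '.[]' | sort    # allowed: lists the two permitted methods
  rpc.py env_dpdk_get_mem_stats                  # not allowlisted: JSON-RPC error -32601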
00:07:28.257 [2024-12-06 17:23:20.151173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489631 ] 00:07:28.257 [2024-12-06 17:23:20.238367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.257 [2024-12-06 17:23:20.273428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.195 17:23:20 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.195 17:23:20 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:29.195 17:23:20 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:29.195 { 00:07:29.195 "version": "SPDK v25.01-pre git sha1 99034762d", 00:07:29.195 "fields": { 00:07:29.195 "major": 25, 00:07:29.195 "minor": 1, 00:07:29.195 "patch": 0, 00:07:29.195 "suffix": "-pre", 00:07:29.195 "commit": "99034762d" 00:07:29.195 } 00:07:29.195 } 00:07:29.195 17:23:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:29.195 17:23:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:29.195 17:23:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:29.196 17:23:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:29.196 17:23:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:29.196 17:23:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.196 17:23:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.196 17:23:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:29.196 17:23:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:29.196 17:23:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:29.196 17:23:21 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.457 request: 00:07:29.457 { 00:07:29.457 "method": "env_dpdk_get_mem_stats", 00:07:29.457 "req_id": 1 00:07:29.457 } 00:07:29.457 Got JSON-RPC error response 00:07:29.457 response: 00:07:29.457 { 00:07:29.457 "code": -32601, 00:07:29.457 "message": "Method not found" 00:07:29.457 } 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.457 17:23:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1489631 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1489631 ']' 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1489631 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489631 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489631' 00:07:29.457 killing process with pid 1489631 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@973 -- # kill 1489631 00:07:29.457 17:23:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 1489631 00:07:29.775 00:07:29.775 real 0m1.673s 00:07:29.776 user 0m2.007s 00:07:29.776 sys 0m0.441s 00:07:29.776 17:23:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.776 17:23:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.776 ************************************ 00:07:29.776 END TEST app_cmdline 00:07:29.776 ************************************ 00:07:29.776 17:23:21 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.776 17:23:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.776 17:23:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.776 17:23:21 -- common/autotest_common.sh@10 -- # set +x 00:07:29.776 ************************************ 00:07:29.776 START TEST version 00:07:29.776 ************************************ 00:07:29.776 17:23:21 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.776 * Looking for test storage... 
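cmdline.sh registers trap 'killprocess $spdk_tgt_pid' EXIT up front, so the target torn down above gets cleaned up even if an assertion fails first. A minimal stand-in for the helper (the real one, per the trace, also verifies the process name via ps and refuses to kill sudo):

  killprocess() {
    kill -0 "$1" || return 0         # already gone
    kill "$1" && wait "$1" || true   # terminate and reap (wait only works for child processes)
  }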
00:07:29.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:29.776 17:23:21 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.776 17:23:21 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.776 17:23:21 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.776 17:23:21 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.776 17:23:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.776 17:23:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.776 17:23:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.776 17:23:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.776 17:23:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.776 17:23:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.776 17:23:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.776 17:23:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.776 17:23:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.776 17:23:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.776 17:23:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.776 17:23:21 version -- scripts/common.sh@344 -- # case "$op" in 00:07:29.776 17:23:21 version -- scripts/common.sh@345 -- # : 1 00:07:29.776 17:23:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.776 17:23:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.776 17:23:21 version -- scripts/common.sh@365 -- # decimal 1 00:07:29.776 17:23:21 version -- scripts/common.sh@353 -- # local d=1 00:07:29.776 17:23:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.776 17:23:21 version -- scripts/common.sh@355 -- # echo 1 00:07:29.776 17:23:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.776 17:23:21 version -- scripts/common.sh@366 -- # decimal 2 00:07:29.776 17:23:21 version -- scripts/common.sh@353 -- # local d=2 00:07:29.776 17:23:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.776 17:23:21 version -- scripts/common.sh@355 -- # echo 2 00:07:29.776 17:23:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.776 17:23:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.776 17:23:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.776 17:23:21 version -- scripts/common.sh@368 -- # return 0 00:07:29.776 17:23:21 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.776 17:23:21 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.776 --rc genhtml_branch_coverage=1 00:07:29.776 --rc genhtml_function_coverage=1 00:07:29.776 --rc genhtml_legend=1 00:07:29.776 --rc geninfo_all_blocks=1 00:07:29.776 --rc geninfo_unexecuted_blocks=1 00:07:29.776 00:07:29.776 ' 00:07:29.776 17:23:21 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.776 --rc genhtml_branch_coverage=1 00:07:29.776 --rc genhtml_function_coverage=1 00:07:29.776 --rc genhtml_legend=1 00:07:29.776 --rc geninfo_all_blocks=1 00:07:29.776 --rc geninfo_unexecuted_blocks=1 00:07:29.776 00:07:29.776 ' 00:07:29.776 17:23:21 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.776 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.776 --rc genhtml_branch_coverage=1 00:07:29.776 --rc genhtml_function_coverage=1 00:07:29.776 --rc genhtml_legend=1 00:07:29.776 --rc geninfo_all_blocks=1 00:07:29.776 --rc geninfo_unexecuted_blocks=1 00:07:29.776 00:07:29.776 ' 00:07:29.776 17:23:21 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.776 --rc genhtml_branch_coverage=1 00:07:29.776 --rc genhtml_function_coverage=1 00:07:29.776 --rc genhtml_legend=1 00:07:29.776 --rc geninfo_all_blocks=1 00:07:29.776 --rc geninfo_unexecuted_blocks=1 00:07:29.776 00:07:29.776 ' 00:07:29.776 17:23:21 version -- app/version.sh@17 -- # get_header_version major 00:07:29.776 17:23:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.776 17:23:21 version -- app/version.sh@14 -- # cut -f2 00:07:29.776 17:23:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.776 17:23:21 version -- app/version.sh@17 -- # major=25 00:07:30.035 17:23:21 version -- app/version.sh@18 -- # get_header_version minor 00:07:30.035 17:23:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.035 17:23:21 version -- app/version.sh@14 -- # cut -f2 00:07:30.035 17:23:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.035 17:23:21 version -- app/version.sh@18 -- # minor=1 00:07:30.035 17:23:21 version -- app/version.sh@19 -- # get_header_version patch 00:07:30.035 17:23:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.035 17:23:21 version -- app/version.sh@14 -- # cut -f2 00:07:30.035 17:23:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.035 17:23:21 version -- app/version.sh@19 -- # patch=0 00:07:30.035 17:23:21 version -- app/version.sh@20 -- # get_header_version suffix 00:07:30.035 17:23:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.035 17:23:21 version -- app/version.sh@14 -- # cut -f2 00:07:30.035 17:23:21 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.035 17:23:21 version -- app/version.sh@20 -- # suffix=-pre 00:07:30.035 17:23:21 version -- app/version.sh@22 -- # version=25.1 00:07:30.035 17:23:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:30.035 17:23:21 version -- app/version.sh@28 -- # version=25.1rc0 00:07:30.035 17:23:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:30.035 17:23:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:30.035 17:23:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:30.035 17:23:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:30.035 00:07:30.035 real 0m0.277s 00:07:30.035 user 0m0.162s 00:07:30.035 sys 0m0.164s 00:07:30.035 17:23:21 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.035 
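The version test above reconstructs the SPDK version string purely from include/spdk/version.h: each component comes from a grep/cut/tr pipeline, and 25.1 plus the -pre suffix becomes 25.1rc0 (the patch component is dropped when it is 0). The same derivation, condensed; the suffix-to-rc0 mapping is inferred from the trace, not quoted from version.sh:

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25
  major=25 minor=1 patch=0 suffix=-pre
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  [[ $suffix == -pre ]] && version=${version}rc0
  echo "$version"   # -> 25.1rc0, matching python3 -c 'import spdk; print(spdk.__version__)'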
17:23:21 version -- common/autotest_common.sh@10 -- # set +x 00:07:30.035 ************************************ 00:07:30.035 END TEST version 00:07:30.035 ************************************ 00:07:30.036 17:23:21 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:30.036 17:23:21 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:30.036 17:23:21 -- spdk/autotest.sh@194 -- # uname -s 00:07:30.036 17:23:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:30.036 17:23:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:30.036 17:23:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:30.036 17:23:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:30.036 17:23:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:30.036 17:23:21 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:30.036 17:23:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.036 17:23:21 -- common/autotest_common.sh@10 -- # set +x 00:07:30.036 17:23:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:30.036 17:23:22 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:30.036 17:23:22 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:30.036 17:23:22 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:30.036 17:23:22 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:30.036 17:23:22 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:30.036 17:23:22 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.036 17:23:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.036 17:23:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.036 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:07:30.036 ************************************ 00:07:30.036 START TEST nvmf_tcp 00:07:30.036 ************************************ 00:07:30.036 17:23:22 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.294 * Looking for test storage... 
00:07:30.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:30.294 17:23:22 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.294 17:23:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.294 17:23:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.294 17:23:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.294 17:23:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.294 17:23:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.294 17:23:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.294 17:23:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.294 17:23:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.294 17:23:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.295 17:23:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:30.295 17:23:22 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.295 17:23:22 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.295 --rc genhtml_branch_coverage=1 00:07:30.295 --rc genhtml_function_coverage=1 00:07:30.295 --rc genhtml_legend=1 00:07:30.295 --rc geninfo_all_blocks=1 00:07:30.295 --rc geninfo_unexecuted_blocks=1 00:07:30.295 00:07:30.295 ' 00:07:30.295 17:23:22 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.295 --rc genhtml_branch_coverage=1 00:07:30.295 --rc genhtml_function_coverage=1 00:07:30.295 --rc genhtml_legend=1 00:07:30.295 --rc geninfo_all_blocks=1 00:07:30.295 --rc geninfo_unexecuted_blocks=1 00:07:30.295 00:07:30.295 ' 00:07:30.295 17:23:22 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:30.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.295 --rc genhtml_branch_coverage=1 00:07:30.295 --rc genhtml_function_coverage=1 00:07:30.295 --rc genhtml_legend=1 00:07:30.295 --rc geninfo_all_blocks=1 00:07:30.295 --rc geninfo_unexecuted_blocks=1 00:07:30.295 00:07:30.295 ' 00:07:30.295 17:23:22 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.295 --rc genhtml_branch_coverage=1 00:07:30.295 --rc genhtml_function_coverage=1 00:07:30.295 --rc genhtml_legend=1 00:07:30.295 --rc geninfo_all_blocks=1 00:07:30.295 --rc geninfo_unexecuted_blocks=1 00:07:30.295 00:07:30.295 ' 00:07:30.295 17:23:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:30.295 17:23:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:30.295 17:23:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:30.295 17:23:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.295 17:23:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.295 17:23:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.295 ************************************ 00:07:30.295 START TEST nvmf_target_core 00:07:30.295 ************************************ 00:07:30.295 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:30.556 * Looking for test storage... 00:07:30.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.556 --rc genhtml_branch_coverage=1 00:07:30.556 --rc genhtml_function_coverage=1 00:07:30.556 --rc genhtml_legend=1 00:07:30.556 --rc geninfo_all_blocks=1 00:07:30.556 --rc geninfo_unexecuted_blocks=1 00:07:30.556 00:07:30.556 ' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.556 --rc genhtml_branch_coverage=1 00:07:30.556 --rc genhtml_function_coverage=1 00:07:30.556 --rc genhtml_legend=1 00:07:30.556 --rc geninfo_all_blocks=1 00:07:30.556 --rc geninfo_unexecuted_blocks=1 00:07:30.556 00:07:30.556 ' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.556 --rc genhtml_branch_coverage=1 00:07:30.556 --rc genhtml_function_coverage=1 00:07:30.556 --rc genhtml_legend=1 00:07:30.556 --rc geninfo_all_blocks=1 00:07:30.556 --rc geninfo_unexecuted_blocks=1 00:07:30.556 00:07:30.556 ' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.556 --rc genhtml_branch_coverage=1 00:07:30.556 --rc genhtml_function_coverage=1 00:07:30.556 --rc genhtml_legend=1 00:07:30.556 --rc geninfo_all_blocks=1 00:07:30.556 --rc geninfo_unexecuted_blocks=1 00:07:30.556 00:07:30.556 ' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.556 
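The "[: : integer expression expected" message above is diagnostic noise, not a test failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', handing an empty variable to the numeric -eq operator of POSIX [, which prints the complaint, returns non-zero, and the script simply falls through. A minimal reproduction and the usual defensive fix (SOME_FLAG is a hypothetical stand-in, not the actual variable name in common.sh):

#!/usr/bin/env bash
unset SOME_FLAG                                   # empty/unset variable, as in the trace
[ "$SOME_FLAG" -eq 1 ] && echo "flag set"         # prints "[: : integer expression expected"
[ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"    # defaulted to 0: silently false when unset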
************************************ 00:07:30.556 START TEST nvmf_abort 00:07:30.556 ************************************ 00:07:30.556 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:30.817 * Looking for test storage... 00:07:30.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:30.817 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.818 --rc genhtml_branch_coverage=1 00:07:30.818 --rc genhtml_function_coverage=1 00:07:30.818 --rc genhtml_legend=1 00:07:30.818 --rc geninfo_all_blocks=1 00:07:30.818 --rc geninfo_unexecuted_blocks=1 00:07:30.818 00:07:30.818 ' 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.818 --rc genhtml_branch_coverage=1 00:07:30.818 --rc genhtml_function_coverage=1 00:07:30.818 --rc genhtml_legend=1 00:07:30.818 --rc geninfo_all_blocks=1 00:07:30.818 --rc geninfo_unexecuted_blocks=1 00:07:30.818 00:07:30.818 ' 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.818 --rc genhtml_branch_coverage=1 00:07:30.818 --rc genhtml_function_coverage=1 00:07:30.818 --rc genhtml_legend=1 00:07:30.818 --rc geninfo_all_blocks=1 00:07:30.818 --rc geninfo_unexecuted_blocks=1 00:07:30.818 00:07:30.818 ' 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.818 --rc genhtml_branch_coverage=1 00:07:30.818 --rc genhtml_function_coverage=1 00:07:30.818 --rc genhtml_legend=1 00:07:30.818 --rc geninfo_all_blocks=1 00:07:30.818 --rc geninfo_unexecuted_blocks=1 00:07:30.818 00:07:30.818 ' 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
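The nvmftestinit body traced next starts with gather_supported_nvmf_pci_devs, which buckets NICs by PCI vendor:device ID: 0x8086:0x1592/0x159b for Intel E810, 0x8086:0x37d2 for X722, plus a list of Mellanox 0x15b3 parts. The two E810 ports at 0000:4b:00.0/1 matched below are what this run uses. A condensed illustration of that matching via lspci (the real common.sh reads its prebuilt pci_bus_cache instead, as the trace shows):

#!/usr/bin/env bash
# Bucket NICs the way the nvmf tests do, keyed on PCI vendor:device IDs.
while read -r addr _class id _rest; do
  case "$id" in
    8086:1592|8086:159b) echo "Found $addr (Intel E810)" ;;
    8086:37d2)           echo "Found $addr (Intel X722)" ;;
    15b3:*)              echo "Found $addr (Mellanox)"   ;;
  esac
done < <(lspci -Dn)   # -D: include PCI domain, -n: numeric IDs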
00:07:30.818 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.819 17:23:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.958 17:23:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:38.958 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:38.958 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:38.958 17:23:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:38.958 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:38.958 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.958 17:23:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.958 17:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.958 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.958 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.958 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:38.958 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.958 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.958 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.958 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:38.958 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:38.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:07:38.958 00:07:38.959 --- 10.0.0.2 ping statistics --- 00:07:38.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.959 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:07:38.959 00:07:38.959 --- 10.0.0.1 ping statistics --- 00:07:38.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.959 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1494122 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1494122 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1494122 ']' 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.959 17:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.959 [2024-12-06 17:23:30.332958] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
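Before the target launch above, nvmf_tcp_init split the two E810 ports into a target/initiator pair: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stayed in the root namespace as 10.0.0.1, TCP port 4420 was opened in the firewall, and reachability was ping-checked both ways. Reduced to its essential commands (interface names as in the trace; requires root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1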
00:07:38.959 [2024-12-06 17:23:30.333023] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.959 [2024-12-06 17:23:30.433134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.959 [2024-12-06 17:23:30.487054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.959 [2024-12-06 17:23:30.487115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.959 [2024-12-06 17:23:30.487123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.959 [2024-12-06 17:23:30.487131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.959 [2024-12-06 17:23:30.487137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.959 [2024-12-06 17:23:30.489221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.959 [2024-12-06 17:23:30.489384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.959 [2024-12-06 17:23:30.489385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.220 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.220 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:39.220 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.220 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.221 [2024-12-06 17:23:31.205858] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.221 Malloc0 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.221 Delay0 
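Each rpc_cmd above is effectively a wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock inside the namespace. Replayed by hand, the provisioning so far is the three calls below; flags are copied verbatim from the trace, and the delay bdev's -r/-t/-w/-n latencies are in microseconds, so every read and write is held for roughly a second, which is precisely what keeps I/Os in flight long enough for the abort example to cancel them:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB backing bdev, 4096-byte blocks
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000          # avg/p99 read and write latency, in us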
00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.221 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.483 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.483 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:39.483 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.483 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.483 [2024-12-06 17:23:31.294927] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.483 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.483 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.483 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.483 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.483 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.483 17:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:39.483 [2024-12-06 17:23:31.486832] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:42.029 Initializing NVMe Controllers 00:07:42.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:42.029 controller IO queue size 128 less than required 00:07:42.029 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:42.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:42.029 Initialization complete. Launching workers. 
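The target side is then finished and exercised: subsystem nqn.2016-06.io.spdk:cnode0 (allow-any-host, serial SPDK0) gets Delay0 as its namespace plus TCP listeners for itself and for discovery, and the abort example connects at queue depth 128. The "controller IO queue size 128 less than required" notice is expected rather than an error: with -q 128 against a 128-entry I/O queue, submissions back up in the driver, which is exactly the condition the abort test wants to create. The equivalent manual sequence (arguments verbatim from the trace):

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128   # one worker core, 1-second run, warning-level logs

In the summary that follows, the large NS-side "failed" count is likewise expected: those are I/Os that completed with an abort status; the pass criterion is the abort side itself reporting "failed 0".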
00:07:42.029 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28584 00:07:42.029 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28645, failed to submit 62 00:07:42.029 success 28588, unsuccessful 57, failed 0 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.029 rmmod nvme_tcp 00:07:42.029 rmmod nvme_fabrics 00:07:42.029 rmmod nvme_keyring 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1494122 ']' 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1494122 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1494122 ']' 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1494122 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1494122 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1494122' 00:07:42.029 killing process with pid 1494122 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1494122 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1494122 00:07:42.029 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.029 17:23:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.030 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.030 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:42.030 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:42.030 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.030 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.030 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.030 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.030 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.030 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.030 17:23:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.943 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:43.943 00:07:43.943 real 0m13.313s 00:07:43.943 user 0m13.909s 00:07:43.943 sys 0m6.567s 00:07:43.943 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.943 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:43.943 ************************************ 00:07:43.943 END TEST nvmf_abort 00:07:43.943 ************************************ 00:07:43.943 17:23:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:43.943 17:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:43.943 17:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.943 17:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.943 ************************************ 00:07:43.943 START TEST nvmf_ns_hotplug_stress 00:07:43.943 ************************************ 00:07:43.943 17:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:44.205 * Looking for test storage... 
00:07:44.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.205 --rc genhtml_branch_coverage=1 00:07:44.205 --rc genhtml_function_coverage=1 00:07:44.205 --rc genhtml_legend=1 00:07:44.205 --rc geninfo_all_blocks=1 00:07:44.205 --rc geninfo_unexecuted_blocks=1 00:07:44.205 00:07:44.205 ' 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.205 --rc genhtml_branch_coverage=1 00:07:44.205 --rc genhtml_function_coverage=1 00:07:44.205 --rc genhtml_legend=1 00:07:44.205 --rc geninfo_all_blocks=1 00:07:44.205 --rc geninfo_unexecuted_blocks=1 00:07:44.205 00:07:44.205 ' 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:44.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.205 --rc genhtml_branch_coverage=1 00:07:44.205 --rc genhtml_function_coverage=1 00:07:44.205 --rc genhtml_legend=1 00:07:44.205 --rc geninfo_all_blocks=1 00:07:44.205 --rc geninfo_unexecuted_blocks=1 00:07:44.205 00:07:44.205 ' 00:07:44.205 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.205 --rc genhtml_branch_coverage=1 00:07:44.205 --rc genhtml_function_coverage=1 00:07:44.205 --rc genhtml_legend=1 00:07:44.205 --rc geninfo_all_blocks=1 00:07:44.206 --rc geninfo_unexecuted_blocks=1 00:07:44.206 00:07:44.206 ' 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
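Note: the repeated /opt/golangci, /opt/protoc and /opt/go segments in PATH above come from paths/export.sh prepending its toolchain directories each time it is sourced, with no membership check. A common guard for that pattern (a generic sketch, not the SPDK script) is:

    case ":$PATH:" in
        *:/opt/go/1.21.1/bin:*) ;;               # already on PATH, do nothing
        *) PATH=/opt/go/1.21.1/bin:$PATH ;;      # prepend only once
    esac
    export PATH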
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:44.206 17:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
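Note: the "[: : integer expression expected" message above is nvmf/common.sh line 33 handing test(1) an empty string where -eq expects an integer ('[' '' -eq 1 ']'). The test simply fails and the run continues, but the noisy and the quiet forms differ only by a default expansion (minimal repro with a hypothetical variable, not the SPDK source):

    unset MAYBE_FLAG                              # hypothetical, deliberately empty
    [ "$MAYBE_FLAG" -eq 1 ] && echo on            # prints the same [: error
    [ "${MAYBE_FLAG:-0}" -eq 1 ] && echo on       # defaulted to 0: silent, false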
local -ga e810 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:52.346 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.346 
17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:52.346 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:52.346 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:52.346 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.346 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
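Note: the cvl_0_0/cvl_0_1 names above are read straight out of sysfs via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*), which is how each E810 PCI function is mapped to its kernel netdev. The same lookup as a standalone sketch:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net/")"   # cvl_0_0, cvl_0_1
    done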
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:07:52.347 00:07:52.347 --- 10.0.0.2 ping statistics --- 00:07:52.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.347 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:07:52.347 00:07:52.347 --- 10.0.0.1 ping statistics --- 00:07:52.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.347 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1498923 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1498923 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
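Note: common.sh@250-@291 above build the two-port loopback these phy tests run on: one E810 port (cvl_0_0, 10.0.0.2/24) is moved into the cvl_0_0_ns_spdk namespace to serve as the target, the other (cvl_0_1, 10.0.0.1/24) stays in the root namespace as the initiator, TCP port 4420 is opened, and reachability is pinged in both directions. Condensed (a sketch of the same steps, not common.sh verbatim):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator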
1498923 ']' 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.347 17:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.347 [2024-12-06 17:23:43.769026] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:07:52.347 [2024-12-06 17:23:43.769093] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.347 [2024-12-06 17:23:43.868817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.347 [2024-12-06 17:23:43.921028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.347 [2024-12-06 17:23:43.921082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.347 [2024-12-06 17:23:43.921090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.347 [2024-12-06 17:23:43.921097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.347 [2024-12-06 17:23:43.921104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
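Note: nvmfappstart launches nvmf_tgt inside the target namespace with -m 0xE, i.e. binary 1110, so core 0 is left free and cores 1-3 host the three reactor threads reported just below (EAL's "Total cores available: 3"). Decoding the mask:

    echo "obase=2; $((0xE))" | bc     # -> 1110: bits 1,2,3 set = cores 1,2,3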
00:07:52.347 [2024-12-06 17:23:43.922986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.347 [2024-12-06 17:23:43.923152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.347 [2024-12-06 17:23:43.923152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.608 17:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.608 17:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:52.608 17:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.608 17:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.608 17:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.608 17:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.608 17:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:52.608 17:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:52.869 [2024-12-06 17:23:44.806770] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.869 17:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:53.130 17:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.390 [2024-12-06 17:23:45.201807] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.390 17:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.390 17:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:53.650 Malloc0 00:07:53.650 17:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:53.911 Delay0 00:07:53.911 17:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.170 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:54.170 NULL1 00:07:54.170 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:54.429 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1499546 00:07:54.429 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:54.429 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:54.429 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.689 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.689 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:54.689 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:54.950 true 00:07:54.950 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:54.950 17:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.212 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.472 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:55.472 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:55.472 true 00:07:55.472 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:55.472 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.732 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.993 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:55.993 17:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:55.993 true 00:07:55.993 17:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:55.993 17:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.254 17:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.516 17:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:56.516 17:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:56.516 true 00:07:56.777 17:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:56.777 17:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.777 17:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.037 17:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:57.037 17:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:57.297 true 00:07:57.297 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:57.297 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.297 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.558 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:57.558 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:57.819 true 00:07:57.819 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:57.819 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.819 17:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.080 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:58.080 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:58.340 true 00:07:58.340 17:23:50 
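Note: from here to the end of the stress window the trace is a single pattern repeated: while the spdk_nvme_perf initiator (PID 1499546, 30 s of 512-byte randread at queue depth 128) is still alive, ns_hotplug_stress.sh hot-removes namespace 1, re-attaches Delay0, and grows the NULL1 bdev by one size unit. The shape of the loop, reconstructed from the @44-@50 trace (a sketch, not the script verbatim):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID"; do                  # initiator still running?
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))               # 1001, 1002, ... as logged
        $rpc bdev_null_resize NULL1 "$null_size"
    done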
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:58.340 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.601 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.601 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:58.601 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:58.861 true 00:07:58.861 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:58.861 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.121 17:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.121 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:59.121 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:59.382 true 00:07:59.382 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:59.382 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.642 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.903 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:59.903 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:59.903 true 00:07:59.903 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:07:59.903 17:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.162 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.422 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:00.422 17:23:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:00.422 true 00:08:00.422 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:00.422 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.683 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.944 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:00.944 17:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:00.944 true 00:08:01.204 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:01.204 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.204 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.464 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:01.464 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:01.724 true 00:08:01.724 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:01.724 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.724 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.985 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:01.985 17:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:02.247 true 00:08:02.247 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:02.247 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.247 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.508 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:02.508 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:02.768 true 00:08:02.768 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:02.768 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.768 17:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.029 17:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:03.029 17:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:03.289 true 00:08:03.289 17:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:03.289 17:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.549 17:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.549 17:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:03.549 17:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:03.809 true 00:08:03.809 17:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:03.809 17:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.070 17:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.070 17:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:04.070 17:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:04.331 true 00:08:04.331 17:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:04.331 17:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.591 17:23:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.591 17:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:04.591 17:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:04.851 true 00:08:04.851 17:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:04.851 17:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.111 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.371 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:05.371 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:05.371 true 00:08:05.371 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:05.371 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.631 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.892 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:05.892 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:05.892 true 00:08:05.892 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:05.892 17:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.151 17:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.459 17:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:06.459 17:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:06.459 true 00:08:06.459 17:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:06.459 17:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.749 17:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.010 17:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:07.010 17:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:07.010 true 00:08:07.010 17:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:07.010 17:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.271 17:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.531 17:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:07.531 17:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:07.531 true 00:08:07.792 17:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:07.792 17:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.792 17:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.053 17:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:08.053 17:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:08.314 true 00:08:08.314 17:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:08.314 17:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.314 17:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.575 17:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:08.575 17:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:08.836 true 00:08:08.836 17:24:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:08.836 17:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.096 17:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.096 17:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:09.096 17:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:09.357 true 00:08:09.357 17:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:09.357 17:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.617 17:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.617 17:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:09.617 17:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:09.877 true 00:08:09.877 17:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:09.877 17:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.137 17:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.398 17:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:10.398 17:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:10.398 true 00:08:10.398 17:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:10.398 17:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.660 17:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.920 17:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:10.920 17:24:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:10.920 true 00:08:10.920 17:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:10.920 17:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.181 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.442 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:11.442 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:11.442 true 00:08:11.703 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:11.703 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.703 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.964 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:11.964 17:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:12.225 true 00:08:12.225 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:12.225 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.226 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.486 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:12.486 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:12.746 true 00:08:12.746 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:12.746 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.008 17:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.008 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:13.008 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:13.269 true 00:08:13.269 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:13.269 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.529 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.529 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:13.529 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:13.790 true 00:08:13.790 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:13.790 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.050 17:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.311 17:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:14.311 17:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:14.311 true 00:08:14.311 17:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:14.311 17:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.572 17:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.832 17:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:14.832 17:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:14.832 true 00:08:14.832 17:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:14.832 17:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.094 17:24:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.355 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:15.355 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:15.355 true 00:08:15.616 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:15.616 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.616 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.878 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:15.878 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:16.140 true 00:08:16.140 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:16.140 17:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.140 17:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.401 17:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:16.401 17:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:16.666 true 00:08:16.666 17:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:16.666 17:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.666 17:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.927 17:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:16.927 17:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:17.187 true 00:08:17.187 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:17.187 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.447 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.447 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:17.448 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:17.708 true 00:08:17.708 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:17.708 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.970 17:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.970 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:17.970 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:18.231 true 00:08:18.231 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:18.231 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.492 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.752 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:18.752 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:18.752 true 00:08:18.752 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:18.752 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.013 17:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.274 17:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:19.274 17:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:19.274 true 00:08:19.274 17:24:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:19.274 17:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.535 17:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.795 17:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:19.796 17:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:19.796 true 00:08:19.796 17:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:19.796 17:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.056 17:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.317 17:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:20.317 17:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:20.317 true 00:08:20.579 17:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:20.579 17:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.579 17:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.840 17:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:20.840 17:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:20.840 true 00:08:21.101 17:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:21.101 17:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.101 17:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.362 17:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:21.362 17:24:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:21.622 true 00:08:21.622 17:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:21.622 17:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.622 17:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.882 17:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:21.882 17:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:22.141 true 00:08:22.141 17:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:22.141 17:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.401 17:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.401 17:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:08:22.401 17:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:08:22.660 true 00:08:22.660 17:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:22.660 17:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.919 17:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.919 17:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:08:22.919 17:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:08:23.178 true 00:08:23.178 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546 00:08:23.178 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.437 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0
00:08:23.697 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:08:23.697 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:08:23.697 true
00:08:23.697 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546
00:08:23.697 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:23.957 17:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:24.216 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:08:24.216 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:08:24.216 true
00:08:24.216 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546
00:08:24.216 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:24.476 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:24.736 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:08:24.736 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:08:24.736 Initializing NVMe Controllers
00:08:24.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:24.736 Controller IO queue size 128, less than required.
00:08:24.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:24.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:24.736 Initialization complete. Launching workers.
00:08:24.736 ========================================================
00:08:24.736                                                                              Latency(us)
00:08:24.736 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:24.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   31043.80      15.16    4123.23    1069.49   11163.43
00:08:24.736 ========================================================
00:08:24.736 Total                                                                    :   31043.80      15.16    4123.23    1069.49   11163.43
00:08:24.995 true
00:08:24.995 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1499546
00:08:24.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1499546) - No such process
00:08:24.995 17:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1499546
00:08:24.995 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:24.995 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:25.254 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:25.254 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:25.254 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:25.254 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:25.254 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:25.515 null0
00:08:25.515 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:25.515 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:25.515 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:25.515 null1
00:08:25.515 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:25.515 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:25.515 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:25.776 null2
00:08:25.776 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:25.776 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:25.776 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:26.037 null3
00:08:26.037 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:26.037 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:26.037 17:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:26.037 null4
00:08:26.037 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:26.037 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:26.037 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:26.298 null5
00:08:26.298 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:26.298 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:26.298 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:26.559 null6
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:26.559 null7
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:26.559 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
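The resize iterations above (null_size 1023 through 1055 in this stretch of the log) all exercise the same five lines of the test, ns_hotplug_stress.sh@44-@50. A minimal bash sketch of that loop, reconstructed from the xtrace markers; the variable names rpc and perf_pid are inferred for illustration, not quoted from the script:

  # Reconstructed from the xtrace markers above; an approximation, not the script verbatim.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  while kill -0 "$perf_pid"; do                                      # @44: kill -0 sends no signal, it only tests that the PID exists
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove NSID 1 under load
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: re-attach the Delay0 bdev
      null_size=$((null_size + 1))                                   # @49: 1023, 1024, ..., 1055
      "$rpc" bdev_null_resize NULL1 "$null_size"                     # @50: grow NULL1 while I/O is running
  done

The loop ends exactly as logged above: once the workload process (PID 1499546) exits and prints its latency summary, kill -0 fails with "No such process" and the script falls through to the wait and the namespace cleanup at @53-@55.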
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
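The worker setup traced at @58-@66 pairs one null bdev with one namespace ID per worker. A sketch reconstructed from those markers, with $rpc as in the previous sketch; the add_remove call shape is taken from entries such as "@63 -- # add_remove 1 null0":

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do            # @59
      "$rpc" bdev_null_create "null$i" 100 4096   # @60: 100 MB null bdev with 4096-byte blocks
  done
  for ((i = 0; i < nthreads; i++)); do            # @62
      add_remove $((i + 1)) "null$i" &            # @63: background worker for NSID i+1
      pids+=($!)                                  # @64
  done
  wait "${pids[@]}"                               # @66: PIDs 1506665 1506666 ... 1506677 in this run

Because the eight workers run in the background, their @14/@16/@17/@18 trace lines interleave with the parent shell's @62/@64 lines from here on; the apparent disorder in the surrounding entries is concurrency, not corruption.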
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
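Each worker is the add_remove helper traced at @14-@18: ten add/remove rounds of one fixed namespace ID against the same subsystem. A reconstruction from the markers (the loop body is inferred from the trace, not quoted from the script):

  add_remove() {
      local nsid=$1 bdev=$2            # @14: e.g. nsid=4 bdev=null3
      for ((i = 0; i < 10; i++)); do   # @16
          # @17: attach $bdev as namespace $nsid; @18: detach it again
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

Since every worker owns a distinct NSID (1 through 8), the eight loops stress concurrent attach and detach on one subsystem without ever racing on the same namespace ID.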
00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1506665 1506666 1506668 1506670 1506672 1506674 1506676 1506677 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:26.821 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:27.082 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:27.082 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.082 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.082 17:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.082 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.343 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:27.605 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:27.866 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.866 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
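Nothing in this run inspects the subsystem between rounds, but when reproducing a failure locally it can help to snapshot the live namespace list; nvmf_get_subsystems is the stock SPDK RPC for that. A hypothetical spot check, not part of this test:

  # Hypothetical: dump cnode1's current namespaces; jq is an assumption here, the test itself does not use it.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'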
00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.867 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:28.127 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:28.127 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:28.127 17:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:28.127 17:24:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.127 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.389 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.651 17:24:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:28.651 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:28.651 17:24:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.912 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.913 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:28.913 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.913 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.913 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:28.913 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.913 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.913 17:24:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:28.913 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:28.913 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.913 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:28.913 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.173 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.173 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.173 17:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.173 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.434 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.435 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.435 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.435 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.435 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.435 17:24:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.435 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.435 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.435 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.435 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.435 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.435 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.696 17:24:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.696 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.957 17:24:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.957 17:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
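The interleaving of the @16-@18 records, together with the fixed pairing of namespace ID n with bdev null(n-1), suggests eight backgrounded workers each looping ten times over its own namespace. A minimal sketch consistent with the xtrace follows; add_remove, rpc_py and nqn are illustrative names inferred from the trace, not necessarily the verbatim test script:

    #!/usr/bin/env bash
    rpc_py=scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # one worker: hot-add and hot-remove a single namespace, ten times
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do                                # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    # eight concurrent workers, nsid n backed by bdev null(n-1); running
    # them in the background is what shuffles the record order above
    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &
    done
    wait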
00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:30.217 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:30.218 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:30.218 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.218 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.218 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.478 17:24:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:30.478 rmmod nvme_tcp 00:08:30.478 rmmod nvme_fabrics 00:08:30.478 rmmod nvme_keyring 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1498923 ']' 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1498923 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1498923 ']' 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1498923 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.478 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1498923 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1498923' 00:08:30.739 killing process with pid 1498923 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1498923 00:08:30.739 17:24:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1498923 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.739 17:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:33.293 00:08:33.293 real 0m48.790s 00:08:33.293 user 3m18.549s 00:08:33.293 sys 0m17.462s 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.293 ************************************ 00:08:33.293 END TEST nvmf_ns_hotplug_stress 00:08:33.293 ************************************ 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.293 ************************************ 00:08:33.293 START TEST nvmf_delete_subsystem 00:08:33.293 ************************************ 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:33.293 * Looking for test storage... 
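Shutdown runs through the trap reset at ns_hotplug_stress.sh@68 and nvmftestfini at @70: the nvme-tcp/fabrics/keyring modules are unloaded and the target process (pid 1498923) is reaped through autotest_common.sh's killprocess helper. A condensed sketch of that helper, matching the checks traced above (the real helper's special handling of sudo wrappers is elided):

    # condensed reconstruction of killprocess() from autotest_common.sh
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                  # '[' -z 1498923 ']'
        kill -0 "$pid" 2> /dev/null || return 0    # still alive?
        if [[ $(uname) == Linux ]]; then
            # never kill a bare sudo wrapper; here comm= was reactor_1
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap, as traced at @978
    }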
00:08:33.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:08:33.293 17:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:33.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.293 --rc genhtml_branch_coverage=1 00:08:33.293 --rc genhtml_function_coverage=1 00:08:33.293 --rc genhtml_legend=1 00:08:33.293 --rc geninfo_all_blocks=1 00:08:33.293 --rc geninfo_unexecuted_blocks=1 00:08:33.293 00:08:33.293 ' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:33.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.293 --rc genhtml_branch_coverage=1 00:08:33.293 --rc genhtml_function_coverage=1 00:08:33.293 --rc genhtml_legend=1 00:08:33.293 --rc geninfo_all_blocks=1 00:08:33.293 --rc geninfo_unexecuted_blocks=1 00:08:33.293 00:08:33.293 ' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:33.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.293 --rc genhtml_branch_coverage=1 00:08:33.293 --rc genhtml_function_coverage=1 00:08:33.293 --rc genhtml_legend=1 00:08:33.293 --rc geninfo_all_blocks=1 00:08:33.293 --rc geninfo_unexecuted_blocks=1 00:08:33.293 00:08:33.293 ' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:33.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.293 --rc genhtml_branch_coverage=1 00:08:33.293 --rc genhtml_function_coverage=1 00:08:33.293 --rc genhtml_legend=1 00:08:33.293 --rc geninfo_all_blocks=1 00:08:33.293 --rc geninfo_unexecuted_blocks=1 00:08:33.293 00:08:33.293 ' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:33.293 17:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.438 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:41.439 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.439 
17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:41.439 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:41.439 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:41.439 Found net devices under 0000:4b:00.1: cvl_0_1 
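The 'Found 0000:4b:00.x' lines above come from gather_supported_nvmf_pci_devs walking sysfs: each PCI function is matched against a table of supported Intel/Mellanox device IDs, and the net interfaces registered under the matching functions are globbed up into net_devs. A rough standalone equivalent of that walk (a sketch, not the harness code; only the E810 ID 0x8086:0x159b seen in this run is matched):

    #!/usr/bin/env bash
    # Enumerate net devices for each Intel E810 function, as the trace above does.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")            # e.g. 0x8086
        device=$(<"$pci/device")            # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do         # same glob as pci_net_devs=(".../net/"*)
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done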
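Separately, the '[: : integer expression expected' complaint at the top of this test (nvmf/common.sh line 33) is a classic test(1) pitfall: an empty or unset variable was handed to an arithmetic comparison, which requires integer operands on both sides. A minimal reproduction and the usual guard (the variable name here is hypothetical, not the one in common.sh):

    flag=""                              # empty in this CI configuration
    [ "$flag" -eq 1 ] && echo hit        # -> [: : integer expression expected; test exits 2
    [ "${flag:-0}" -eq 1 ] && echo hit   # guarded: empty defaults to 0, no error

Because test exits non-zero on the error, the guarded branch above is simply skipped, which is why the run continues past the message.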
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:41.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:41.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms
00:08:41.439
00:08:41.439 --- 10.0.0.2 ping statistics ---
00:08:41.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:41.439 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:41.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:41.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms
00:08:41.439
00:08:41.439 --- 10.0.0.1 ping statistics ---
00:08:41.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:41.439 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1511868
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1511868
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1511868 ']'
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:41.439 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.439 17:24:32
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.440 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.440 17:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.440 [2024-12-06 17:24:32.643634] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:08:41.440 [2024-12-06 17:24:32.643706] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.440 [2024-12-06 17:24:32.736281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.440 [2024-12-06 17:24:32.788005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.440 [2024-12-06 17:24:32.788065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.440 [2024-12-06 17:24:32.788074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.440 [2024-12-06 17:24:32.788081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.440 [2024-12-06 17:24:32.788087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.440 [2024-12-06 17:24:32.789694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.440 [2024-12-06 17:24:32.789756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.440 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.440 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:41.440 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:41.440 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:41.440 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.701 [2024-12-06 17:24:33.514545] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:41.701 17:24:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.701 [2024-12-06 17:24:33.538872] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.701 NULL1 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.701 Delay0 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1512212 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:41.701 17:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:41.701 [2024-12-06 17:24:33.665822] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
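Stepping back over the trace above: because this is a phy run, the two E810 ports are cabled back-to-back, and nvmf_tcp_init splits them into an initiator side and a target side by hiding one port in a network namespace. Condensed from the commands as they appear in the log:

    ip netns add cvl_0_0_ns_spdk                            # target lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                      # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

Note that the real run tags the iptables rule with an 'SPDK_NVMF' comment, which is what lets teardown find and remove exactly this rule later.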
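The target provisioning traced above all goes over /var/tmp/spdk.sock; rpc_cmd is a thin wrapper around scripts/rpc.py. The same sequence written out directly (arguments copied from the log; the delay-bdev latencies are in microseconds, so every I/O to Delay0 takes about a second, which is what guarantees a full queue at deletion time):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB backing bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s average/p99 latency per op
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The workload pinned against this is spdk_nvme_perf with -q 128 -o 512 -w randrw -M 70 -t 5: queue depth 128 of 512-byte random I/O (70% reads) against a device that completes roughly one I/O per second, so essentially everything submitted is still outstanding when the subsystem goes away.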
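What follows is the test itself: nvmf_delete_subsystem is fired while that perf job is mid-run, and the wall of 'completed with error (sct=0, sc=8)' lines below is the expected fallout; sc=8 is, if memory of the NVMe base spec serves, the generic 'Command Aborted due to SQ Deletion' status. The script's control flow, paraphrased (not verbatim) from the delete_subsystem.sh lines traced below:

    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem under live I/O

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do     # kill -0 only probes the PID, sends no signal
        sleep 0.5
        (( delay++ > 30 )) && exit 1               # fail the test if perf outlives ~15 s
    done
    NOT wait "$perf_pid"   # NOT is the harness helper asserting the wrapped command fails:
                           # perf must exit non-zero, since all of its I/O was aborted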
00:08:43.610 17:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:43.610 17:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.610 17:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.870 [condensed: from 00:08:43.870 to 00:08:45.257 the log repeats 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' several hundred times, interleaved with 'starting I/O failed: -6' as new submissions were rejected; the expected fallout of deleting the subsystem under live I/O. The distinct nvme_tcp errors from that window are kept below.]
00:08:43.870 [2024-12-06 17:24:35.913264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13732c0 is same with the state(6) to be set
00:08:43.871 [2024-12-06 17:24:35.916904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6c40000c40 is same with the state(6) to be set
00:08:45.256 [2024-12-06 17:24:36.888802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13749b0 is same with the state(6) to be set
00:08:45.256 [2024-12-06 17:24:36.916804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13734a0 is same with the state(6) to be set
00:08:45.256 [2024-12-06 17:24:36.917417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1373860 is same with the state(6) to be set
00:08:45.256 [2024-12-06 17:24:36.919009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6c4000d020 is same with the state(6) to be set
00:08:45.257 [2024-12-06 17:24:36.919152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6c4000d7c0 is same with the state(6) to be set
00:08:45.257 Initializing NVMe Controllers
00:08:45.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:45.257 Controller IO queue size 128, less than required.
00:08:45.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:45.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:45.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:45.257 Initialization complete. Launching workers.
00:08:45.257 ========================================================
00:08:45.257                                                                               Latency(us)
00:08:45.257 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:45.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     177.14       0.09  880054.33     363.16 1007668.39
00:08:45.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     181.62       0.09  938402.29     415.57 2002377.42
00:08:45.257 ========================================================
00:08:45.257 Total                                                                    :     358.76       0.18  909592.48     363.16 2002377.42
00:08:45.257
00:08:45.257 [2024-12-06 17:24:36.919665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13749b0 (9): Bad file descriptor
00:08:45.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:45.257 17:24:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.257 17:24:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:45.257 17:24:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1512212
00:08:45.257 17:24:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1512212
00:08:45.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1512212) - No such process
00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1512212
00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1512212
00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:45.518
17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1512212 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:45.518 [2024-12-06 17:24:37.449728] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1512898 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1512898 00:08:45.518 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.518 [2024-12-06 
17:24:37.557469] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:46.089 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.089 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1512898 00:08:46.089 17:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.687 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.687 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1512898 00:08:46.687 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.947 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.947 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1512898 00:08:46.947 17:24:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:47.518 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:47.518 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1512898 00:08:47.518 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.090 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:48.090 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1512898 00:08:48.090 17:24:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:48.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1512898 00:08:48.683 17:24:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.944 Initializing NVMe Controllers 00:08:48.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:48.944 Controller IO queue size 128, less than required. 00:08:48.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:48.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:48.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:48.944 Initialization complete. Launching workers. 
00:08:48.944 ========================================================
00:08:48.944                                                                               Latency(us)
00:08:48.944 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:48.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1001984.59 1000201.30 1006471.12
00:08:48.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1002899.71 1000338.20 1008573.37
00:08:48.944 ========================================================
00:08:48.944 Total                                                                    :     256.00       0.12 1002442.15 1000201.30 1008573.37
00:08:48.944
00:08:48.944 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:48.944 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1512898
00:08:48.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1512898) - No such process
00:08:48.944 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1512898
00:08:48.944 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:48.944 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:48.944 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:48.944 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:49.205 rmmod nvme_tcp
00:08:49.205 rmmod nvme_fabrics
00:08:49.205 rmmod nvme_keyring
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1511868 ']'
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1511868
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1511868 ']'
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1511868
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1511868
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1511868' 00:08:49.205 killing process with pid 1511868 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1511868 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1511868 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.205 17:24:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.846 00:08:51.846 real 0m18.491s 00:08:51.846 user 0m31.148s 00:08:51.846 sys 0m6.920s 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.846 ************************************ 00:08:51.846 END TEST nvmf_delete_subsystem 00:08:51.846 ************************************ 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.846 ************************************ 00:08:51.846 START TEST nvmf_host_management 00:08:51.846 ************************************ 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:51.846 * Looking for test storage... 
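Before the host_management output continues, the teardown just traced is worth a note: the initiator-side kernel modules are unloaded, the target is killed by PID, and only the SPDK-tagged firewall rules are removed, which is where the comment on the original iptables insert pays off. A rough equivalent of what nvmftestfini/iptr does here (the netns delete is an assumption about what _remove_spdk_ns expands to in this run):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged SPDK_NVMF
    modprobe -r nvme-tcp nvme-fabrics                      # unload initiator-side modules
    ip netns delete cvl_0_0_ns_spdk                        # assumed: tear down the target namespace
    ip -4 addr flush cvl_0_1                               # clear the initiator port's addressing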
00:08:51.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:51.846 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:51.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.847 --rc genhtml_branch_coverage=1 00:08:51.847 --rc genhtml_function_coverage=1 00:08:51.847 --rc genhtml_legend=1 00:08:51.847 --rc geninfo_all_blocks=1 00:08:51.847 --rc geninfo_unexecuted_blocks=1 00:08:51.847 00:08:51.847 ' 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:51.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.847 --rc genhtml_branch_coverage=1 00:08:51.847 --rc genhtml_function_coverage=1 00:08:51.847 --rc genhtml_legend=1 00:08:51.847 --rc geninfo_all_blocks=1 00:08:51.847 --rc geninfo_unexecuted_blocks=1 00:08:51.847 00:08:51.847 ' 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:51.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.847 --rc genhtml_branch_coverage=1 00:08:51.847 --rc genhtml_function_coverage=1 00:08:51.847 --rc genhtml_legend=1 00:08:51.847 --rc geninfo_all_blocks=1 00:08:51.847 --rc geninfo_unexecuted_blocks=1 00:08:51.847 00:08:51.847 ' 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:51.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.847 --rc genhtml_branch_coverage=1 00:08:51.847 --rc genhtml_function_coverage=1 00:08:51.847 --rc genhtml_legend=1 00:08:51.847 --rc geninfo_all_blocks=1 00:08:51.847 --rc geninfo_unexecuted_blocks=1 00:08:51.847 00:08:51.847 ' 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:51.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.847 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.848 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.848 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.848 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.848 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.848 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:51.848 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:51.848 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.848 17:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:00.009 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.009 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:00.010 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:00.010 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.010 17:24:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:00.010 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.010 17:24:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:00.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:00.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms
00:09:00.010
00:09:00.010 --- 10.0.0.2 ping statistics ---
00:09:00.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:00.010 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms
00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:00.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:00.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms
00:09:00.010
00:09:00.010 --- 10.0.0.1 ping statistics ---
00:09:00.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:00.010 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms
00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1517925 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1517925 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:00.010 17:24:51
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1517925 ']' 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.010 17:24:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.010 [2024-12-06 17:24:51.184758] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:09:00.010 [2024-12-06 17:24:51.184824] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.010 [2024-12-06 17:24:51.283701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.010 [2024-12-06 17:24:51.336506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.010 [2024-12-06 17:24:51.336555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.010 [2024-12-06 17:24:51.336564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.010 [2024-12-06 17:24:51.336572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.010 [2024-12-06 17:24:51.336578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
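The app_setup_trace NOTICEs above name both ways to inspect the tracepoints enabled by -e 0xFFFF. As a sketch (the commands come straight from the notices; only the output file names are assumptions):

  spdk_trace -s nvmf -i 0 > nvmf_events.txt   # live snapshot of events at runtime, per the notice
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0     # or keep the shm file for offline analysis/debug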
00:09:00.010 [2024-12-06 17:24:51.338539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.010 [2024-12-06 17:24:51.338703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.010 [2024-12-06 17:24:51.338865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.010 [2024-12-06 17:24:51.338866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.011 [2024-12-06 17:24:52.063717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.011 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.271 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:00.271 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:00.271 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:00.271 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.271 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.271 Malloc0 00:09:00.271 [2024-12-06 17:24:52.144084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.271 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.271 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:00.271 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.271 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.271 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1518149 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1518149 /var/tmp/bdevperf.sock 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1518149 ']' 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:00.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:00.272 {
00:09:00.272 "params": {
00:09:00.272 "name": "Nvme$subsystem",
00:09:00.272 "trtype": "$TEST_TRANSPORT",
00:09:00.272 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:00.272 "adrfam": "ipv4",
00:09:00.272 "trsvcid": "$NVMF_PORT",
00:09:00.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:00.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:00.272 "hdgst": ${hdgst:-false},
00:09:00.272 "ddgst": ${ddgst:-false}
00:09:00.272 },
00:09:00.272 "method": "bdev_nvme_attach_controller"
00:09:00.272 }
00:09:00.272 EOF
00:09:00.272 )")
00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:00.272 17:24:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:00.272 "params": {
00:09:00.272 "name": "Nvme0",
00:09:00.272 "trtype": "tcp",
00:09:00.272 "traddr": "10.0.0.2",
00:09:00.272 "adrfam": "ipv4",
00:09:00.272 "trsvcid": "4420",
00:09:00.272 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:00.272 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:09:00.272 "hdgst": false,
00:09:00.272 "ddgst": false
00:09:00.272 },
00:09:00.272 "method": "bdev_nvme_attach_controller"
00:09:00.272 }'
00:09:00.272 [2024-12-06 17:24:52.253481] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
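gen_nvmf_target_json, traced just above, builds the JSON that bdevperf reads from /dev/fd/63 by expanding shell variables inside a heredoc and validating the result with jq, which fails loudly if the expansion broke the JSON. A minimal sketch of the same technique, using the values the trace resolved (the script framing around the heredoc is illustrative, not the harness's code):

  #!/usr/bin/env bash
  # Heredoc JSON templating: unquoted EOF lets $vars and ${var:-default} expand in place.
  TEST_TRANSPORT=tcp
  NVMF_FIRST_TARGET_IP=10.0.0.2
  NVMF_PORT=4420
  subsystem=0
  config=$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "$TEST_TRANSPORT",
      "traddr": "$NVMF_FIRST_TARGET_IP",
      "adrfam": "ipv4",
      "trsvcid": "$NVMF_PORT",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": ${hdgst:-false},
      "ddgst": ${ddgst:-false}
    },
    "method": "bdev_nvme_attach_controller"
  }
  EOF
  )
  printf '%s\n' "$config" | jq .   # jq exits non-zero if the generated JSON is malformed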
00:09:00.272 [2024-12-06 17:24:52.253551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518149 ] 00:09:00.531 [2024-12-06 17:24:52.347236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.531 [2024-12-06 17:24:52.401152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.791 Running I/O for 10 seconds... 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.052 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:01.313 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.313 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=543 00:09:01.313 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 543 -ge 100 ']' 00:09:01.313 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:01.313 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:01.313 17:24:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:01.313 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:01.313 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.313 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:01.313 [2024-12-06 17:24:53.140262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ca940 is same with the state(6) to be set 00:09:01.313 [2024-12-06 17:24:53.140900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.313 [2024-12-06 17:24:53.140958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.313 [2024-12-06 17:24:53.140971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.313 [2024-12-06 17:24:53.140981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.313 [2024-12-06 17:24:53.140990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.313 [2024-12-06 17:24:53.140998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.141007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.314 [2024-12-06 17:24:53.141015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.141023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236ac20 is same with the state(6) to be set 00:09:01.314 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.314 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:01.314 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.314 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:01.314 [2024-12-06 17:24:53.155948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236ac20 (9): Bad file descriptor 00:09:01.314 [2024-12-06 17:24:53.156062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.314 [2024-12-06 17:24:53.156292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.314 [2024-12-06 17:24:53.156301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:01.314 [2024-12-06 17:24:53.156311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:01.314 [2024-12-06 17:24:53.156319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:01.314 [... the same WRITE / ABORTED - SQ DELETION notice pair repeats for cid:13 through cid:51 (lba:83584 through lba:88448, len:128 each); condensed here ...]
00:09:01.315 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:01.315 [... the notice pairs continue, interleaved with the xtrace line above, for cid:52 through cid:62 (lba:88576 through lba:89856); condensed here ...]
00:09:01.315 [2024-12-06 17:24:53.157231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:01.315 [2024-12-06 17:24:53.157240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:01.315 17:24:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:09:01.315 [2024-12-06 17:24:53.158512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:09:01.315 task offset: 81920 on job bdev=Nvme0n1 fails
00:09:01.315 
00:09:01.315 Latency(us)
00:09:01.315 [2024-12-06T16:24:53.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:01.315 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:09:01.315 Job: Nvme0n1 ended in about 0.43 seconds with error
00:09:01.315 Verification LBA range: start 0x0 length 0x400
00:09:01.315 Nvme0n1 : 0.43 1503.72 93.98 150.37 0.00 37498.91 1706.67 34297.17
00:09:01.315 [2024-12-06T16:24:53.381Z] ===================================================================================================================
00:09:01.315 [2024-12-06T16:24:53.381Z] Total : 1503.72 93.98 150.37 0.00 37498.91 1706.67 34297.17
00:09:01.315 [2024-12-06 17:24:53.160752] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:01.315 [2024-12-06 17:24:53.171840] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
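The notice storm above is what this subtest appears to be exercising: the submission queue is deleted while bdevperf still has 64 WRITEs in flight, every queued command comes back ABORTED - SQ DELETION, the failed-job latency table is printed, and bdev_nvme recovers by resetting the controller. A minimal sketch of the failure-injection pattern that the host_management.sh trace walks through around lines 87-91 (the variable name and the `|| true` spelling are assumptions, not the script's literal text):

    # Hard-kill the process under test while I/O is outstanding; tolerate
    # ESRCH ("No such process") if it has already exited on its own.
    perf_pid=$!            # assumed: PID of the backgrounded bdevperf job
    sleep 1                # give the workload time to queue up I/O
    kill -9 "$perf_pid" || true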
00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1518149 00:09:02.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1518149) - No such process 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:02.260 { 00:09:02.260 "params": { 00:09:02.260 "name": "Nvme$subsystem", 00:09:02.260 "trtype": "$TEST_TRANSPORT", 00:09:02.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:02.260 "adrfam": "ipv4", 00:09:02.260 "trsvcid": "$NVMF_PORT", 00:09:02.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:02.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:02.260 "hdgst": ${hdgst:-false}, 00:09:02.260 "ddgst": ${ddgst:-false} 00:09:02.260 }, 00:09:02.260 "method": "bdev_nvme_attach_controller" 00:09:02.260 } 00:09:02.260 EOF 00:09:02.260 )") 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:02.260 17:24:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:02.260 "params": { 00:09:02.260 "name": "Nvme0", 00:09:02.260 "trtype": "tcp", 00:09:02.260 "traddr": "10.0.0.2", 00:09:02.260 "adrfam": "ipv4", 00:09:02.260 "trsvcid": "4420", 00:09:02.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:02.260 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:02.260 "hdgst": false, 00:09:02.260 "ddgst": false 00:09:02.260 }, 00:09:02.260 "method": "bdev_nvme_attach_controller" 00:09:02.260 }' 00:09:02.260 [2024-12-06 17:24:54.218947] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:09:02.260 [2024-12-06 17:24:54.219003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518620 ] 00:09:02.260 [2024-12-06 17:24:54.304205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.521 [2024-12-06 17:24:54.339693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.521 Running I/O for 1 seconds... 
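The heredoc above feeds the generated config to bdevperf over /dev/fd/62. The same run can be reproduced by hand by wrapping the printed attach parameters in a standard SPDK JSON config file; a sketch (the bdevperf.json filename and the subsystems wrapper are assumptions, since the log only prints the inner method object):

    # Write the attach parameters printed above into a JSON-config file.
    cat > bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload as the trace: queue depth 64, 64 KiB I/O, verify, 1 second.
    ./build/examples/bdevperf --json bdevperf.json -q 64 -o 65536 -w verify -t 1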
00:09:03.903 1733.00 IOPS, 108.31 MiB/s
00:09:03.903 
00:09:03.903 Latency(us)
00:09:03.903 [2024-12-06T16:24:55.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:03.903 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:09:03.903 Verification LBA range: start 0x0 length 0x400
00:09:03.903 Nvme0n1 : 1.01 1770.81 110.68 0.00 0.00 35453.80 1686.19 34297.17
00:09:03.903 [2024-12-06T16:24:55.969Z] ===================================================================================================================
00:09:03.903 [2024-12-06T16:24:55.969Z] Total : 1770.81 110.68 0.00 0.00 35453.80 1686.19 34297.17
00:09:03.903 
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1517925 ']'
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1517925
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1517925 ']'
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1517925
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1517925
17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
17:24:55
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1517925' 00:09:03.903 killing process with pid 1517925 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1517925 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1517925 00:09:03.903 [2024-12-06 17:24:55.938461] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:03.903 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.163 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.163 17:24:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.078 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:06.078 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:06.078 00:09:06.078 real 0m14.638s 00:09:06.078 user 0m23.178s 00:09:06.078 sys 0m6.780s 00:09:06.078 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.078 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.078 ************************************ 00:09:06.078 END TEST nvmf_host_management 00:09:06.078 ************************************ 00:09:06.078 17:24:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:06.078 17:24:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.078 17:24:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.078 17:24:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.078 ************************************ 00:09:06.078 START TEST nvmf_lvol 00:09:06.078 ************************************ 00:09:06.078 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:06.340 * Looking for test storage... 00:09:06.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:06.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.340 --rc genhtml_branch_coverage=1 00:09:06.340 --rc genhtml_function_coverage=1 00:09:06.340 --rc genhtml_legend=1 00:09:06.340 --rc geninfo_all_blocks=1 00:09:06.340 --rc geninfo_unexecuted_blocks=1 00:09:06.340 00:09:06.340 ' 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:06.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.340 --rc genhtml_branch_coverage=1 00:09:06.340 --rc genhtml_function_coverage=1 00:09:06.340 --rc genhtml_legend=1 00:09:06.340 --rc geninfo_all_blocks=1 00:09:06.340 --rc geninfo_unexecuted_blocks=1 00:09:06.340 00:09:06.340 ' 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:06.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.340 --rc genhtml_branch_coverage=1 00:09:06.340 --rc genhtml_function_coverage=1 00:09:06.340 --rc genhtml_legend=1 00:09:06.340 --rc geninfo_all_blocks=1 00:09:06.340 --rc geninfo_unexecuted_blocks=1 00:09:06.340 00:09:06.340 ' 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:06.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.340 --rc genhtml_branch_coverage=1 00:09:06.340 --rc genhtml_function_coverage=1 00:09:06.340 --rc genhtml_legend=1 00:09:06.340 --rc geninfo_all_blocks=1 00:09:06.340 --rc geninfo_unexecuted_blocks=1 00:09:06.340 00:09:06.340 ' 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
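The cmp_versions trace above is common.sh deciding that the installed lcov (1.15) predates 2.x: it splits both version strings on `.`, `-` and `:`, then compares component-wise, treating missing components as 0. The same idea as a compact standalone function (the name version_lt is mine, not the script's):

    version_lt() {
      # Return 0 (true) if $1 sorts strictly before $2, numerically per component.
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"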
00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.340 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.341 17:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:14.486 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:14.486 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.486 17:25:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:14.486 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:14.486 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:14.486 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:09:14.487 00:09:14.487 --- 10.0.0.2 ping statistics --- 00:09:14.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.487 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:09:14.487 00:09:14.487 --- 10.0.0.1 ping statistics --- 00:09:14.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.487 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1523030 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1523030 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1523030 ']' 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.487 17:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:14.487 [2024-12-06 17:25:05.937160] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
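A few lines up, common.sh moved one of the two E810 ports into a private network namespace and verified reachability in both directions before starting the target; the namespace forces host and target traffic onto the physical ports rather than letting the kernel shortcut over loopback. Collected in order, with the xtrace prefixes stripped (every command is taken verbatim from the trace; only the comments are mine):

    # Move the target-side port into its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address both ends and bring the links up.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1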
00:09:14.487 [2024-12-06 17:25:05.937234] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.487 [2024-12-06 17:25:06.034522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.487 [2024-12-06 17:25:06.086783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.487 [2024-12-06 17:25:06.086837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.487 [2024-12-06 17:25:06.086846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.487 [2024-12-06 17:25:06.086853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.487 [2024-12-06 17:25:06.086860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.487 [2024-12-06 17:25:06.088680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.487 [2024-12-06 17:25:06.088794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.487 [2024-12-06 17:25:06.088795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.748 17:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.748 17:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:14.748 17:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.748 17:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.748 17:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:14.748 17:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.748 17:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:15.008 [2024-12-06 17:25:06.970901] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.008 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.269 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:15.269 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.530 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:15.530 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:15.790 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:16.051 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=504f5d86-85e3-43ec-8211-cf9155bde5ea 00:09:16.051 17:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 504f5d86-85e3-43ec-8211-cf9155bde5ea lvol 20 00:09:16.051 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c0272597-05a7-40c7-97c1-5db529866db9 00:09:16.051 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:16.311 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c0272597-05a7-40c7-97c1-5db529866db9 00:09:16.571 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:16.571 [2024-12-06 17:25:08.616324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.571 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.830 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1523719 00:09:16.830 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:16.830 17:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:17.833 17:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c0272597-05a7-40c7-97c1-5db529866db9 MY_SNAPSHOT 00:09:18.092 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bc8cdd59-ef06-49e6-8449-df2c572a19c1 00:09:18.092 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c0272597-05a7-40c7-97c1-5db529866db9 30 00:09:18.351 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bc8cdd59-ef06-49e6-8449-df2c572a19c1 MY_CLONE 00:09:18.610 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=23cc457a-7901-4504-888b-6f940d82eed5 00:09:18.610 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 23cc457a-7901-4504-888b-6f940d82eed5 00:09:18.869 17:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1523719 00:09:27.004 Initializing NVMe Controllers 00:09:27.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:27.004 Controller IO queue size 128, less than required. 00:09:27.004 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
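Before the run completes below, it is worth collecting what the trace above just did: stripped of xtrace noise, the lvol exercise is a short RPC sequence layered on a RAID-0 of two malloc bdevs, with the snapshot, resize, clone, and inflate all issued while perf I/O is in flight. A sketch (the $rpc shorthand and the captured-UUID plumbing are mine; sizes are in MiB):

    rpc=scripts/rpc.py                                   # assumed: run from the SPDK tree
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                       # -> Malloc0
    $rpc bdev_malloc_create 64 512                       # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB volume, prints its UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # taken while perf I/O is running
    $rpc bdev_lvol_resize "$lvol" 30                     # grow the live volume to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                      # detach the clone from its snapshot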
00:09:27.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:27.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:27.004 Initialization complete. Launching workers. 00:09:27.004 ======================================================== 00:09:27.004 Latency(us) 00:09:27.004 Device Information : IOPS MiB/s Average min max 00:09:27.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15928.60 62.22 8038.08 1492.98 52445.76 00:09:27.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17207.80 67.22 7439.29 379.82 65741.92 00:09:27.004 ======================================================== 00:09:27.004 Total : 33136.40 129.44 7727.13 379.82 65741.92 00:09:27.004 00:09:27.264 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:27.264 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c0272597-05a7-40c7-97c1-5db529866db9 00:09:27.526 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 504f5d86-85e3-43ec-8211-cf9155bde5ea 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.787 rmmod nvme_tcp 00:09:27.787 rmmod nvme_fabrics 00:09:27.787 rmmod nvme_keyring 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1523030 ']' 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1523030 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1523030 ']' 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1523030 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1523030 00:09:27.787 17:25:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1523030' 00:09:27.787 killing process with pid 1523030 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1523030 00:09:27.787 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1523030 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.048 17:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.962 17:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.962 00:09:29.962 real 0m23.846s 00:09:29.962 user 1m4.424s 00:09:29.962 sys 0m8.617s 00:09:29.962 17:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.962 17:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:29.962 ************************************ 00:09:29.962 END TEST nvmf_lvol 00:09:29.962 ************************************ 00:09:29.962 17:25:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:29.962 17:25:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.962 17:25:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.962 17:25:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.223 ************************************ 00:09:30.223 START TEST nvmf_lvs_grow 00:09:30.223 ************************************ 00:09:30.223 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:30.223 * Looking for test storage... 
00:09:30.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.223 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.223 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.223 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.223 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.223 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.223 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.223 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.223 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.224 --rc genhtml_branch_coverage=1 00:09:30.224 --rc genhtml_function_coverage=1 00:09:30.224 --rc genhtml_legend=1 00:09:30.224 --rc geninfo_all_blocks=1 00:09:30.224 --rc geninfo_unexecuted_blocks=1 00:09:30.224 00:09:30.224 ' 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.224 --rc genhtml_branch_coverage=1 00:09:30.224 --rc genhtml_function_coverage=1 00:09:30.224 --rc genhtml_legend=1 00:09:30.224 --rc geninfo_all_blocks=1 00:09:30.224 --rc geninfo_unexecuted_blocks=1 00:09:30.224 00:09:30.224 ' 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.224 --rc genhtml_branch_coverage=1 00:09:30.224 --rc genhtml_function_coverage=1 00:09:30.224 --rc genhtml_legend=1 00:09:30.224 --rc geninfo_all_blocks=1 00:09:30.224 --rc geninfo_unexecuted_blocks=1 00:09:30.224 00:09:30.224 ' 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.224 --rc genhtml_branch_coverage=1 00:09:30.224 --rc genhtml_function_coverage=1 00:09:30.224 --rc genhtml_legend=1 00:09:30.224 --rc geninfo_all_blocks=1 00:09:30.224 --rc geninfo_unexecuted_blocks=1 00:09:30.224 00:09:30.224 ' 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:30.224 17:25:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.224 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:30.486 17:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:38.692 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:38.692 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.692 17:25:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:38.692 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:38.692 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:38.692 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:38.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:09:38.693 00:09:38.693 --- 10.0.0.2 ping statistics --- 00:09:38.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.693 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:09:38.693 00:09:38.693 --- 10.0.0.1 ping statistics --- 00:09:38.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.693 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1530090 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1530090 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1530090 ']' 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.693 17:25:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.693 [2024-12-06 17:25:29.843255] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:09:38.693 [2024-12-06 17:25:29.843321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.693 [2024-12-06 17:25:29.939975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.693 [2024-12-06 17:25:29.991316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.693 [2024-12-06 17:25:29.991367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.693 [2024-12-06 17:25:29.991376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.693 [2024-12-06 17:25:29.991384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.693 [2024-12-06 17:25:29.991390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.693 [2024-12-06 17:25:29.992140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.693 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.693 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:38.693 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:38.693 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.693 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.693 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.693 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:38.954 [2024-12-06 17:25:30.864567] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.954 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:38.954 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.954 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.954 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.954 ************************************ 00:09:38.954 START TEST lvs_grow_clean 00:09:38.954 ************************************ 00:09:38.955 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:38.955 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:38.955 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:38.955 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:38.955 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:38.955 17:25:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:38.955 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:38.955 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.955 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.955 17:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:39.215 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:39.215 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:39.475 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:39.475 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:39.475 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:39.735 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:39.735 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:39.735 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da lvol 150 00:09:39.735 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=80ee756c-634d-4e6d-88be-17092527f6b7 00:09:39.735 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:39.735 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:39.995 [2024-12-06 17:25:31.922173] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:39.995 [2024-12-06 17:25:31.922255] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:39.995 true 00:09:39.995 17:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:39.995 17:25:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:40.256 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:40.256 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:40.256 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80ee756c-634d-4e6d-88be-17092527f6b7 00:09:40.517 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:40.780 [2024-12-06 17:25:32.644459] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1530804 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1530804 /var/tmp/bdevperf.sock 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1530804 ']' 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:40.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.780 17:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:41.040 [2024-12-06 17:25:32.862485] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:09:41.040 [2024-12-06 17:25:32.862548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530804 ] 00:09:41.040 [2024-12-06 17:25:32.951502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.040 [2024-12-06 17:25:33.003310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.985 17:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.985 17:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:41.985 17:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:42.247 Nvme0n1 00:09:42.247 17:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:42.247 [ 00:09:42.247 { 00:09:42.247 "name": "Nvme0n1", 00:09:42.247 "aliases": [ 00:09:42.247 "80ee756c-634d-4e6d-88be-17092527f6b7" 00:09:42.247 ], 00:09:42.247 "product_name": "NVMe disk", 00:09:42.247 "block_size": 4096, 00:09:42.247 "num_blocks": 38912, 00:09:42.247 "uuid": "80ee756c-634d-4e6d-88be-17092527f6b7", 00:09:42.247 "numa_id": 0, 00:09:42.247 "assigned_rate_limits": { 00:09:42.247 "rw_ios_per_sec": 0, 00:09:42.247 "rw_mbytes_per_sec": 0, 00:09:42.247 "r_mbytes_per_sec": 0, 00:09:42.247 "w_mbytes_per_sec": 0 00:09:42.247 }, 00:09:42.247 "claimed": false, 00:09:42.247 "zoned": false, 00:09:42.247 "supported_io_types": { 00:09:42.247 "read": true, 00:09:42.247 "write": true, 00:09:42.247 "unmap": true, 00:09:42.247 "flush": true, 00:09:42.247 "reset": true, 00:09:42.247 "nvme_admin": true, 00:09:42.247 "nvme_io": true, 00:09:42.247 "nvme_io_md": false, 00:09:42.247 "write_zeroes": true, 00:09:42.247 "zcopy": false, 00:09:42.247 "get_zone_info": false, 00:09:42.247 "zone_management": false, 00:09:42.247 "zone_append": false, 00:09:42.247 "compare": true, 00:09:42.247 "compare_and_write": true, 00:09:42.247 "abort": true, 00:09:42.247 "seek_hole": false, 00:09:42.247 "seek_data": false, 00:09:42.247 "copy": true, 00:09:42.247 "nvme_iov_md": false 00:09:42.247 }, 00:09:42.247 "memory_domains": [ 00:09:42.247 { 00:09:42.247 "dma_device_id": "system", 00:09:42.247 "dma_device_type": 1 00:09:42.247 } 00:09:42.247 ], 00:09:42.247 "driver_specific": { 00:09:42.247 "nvme": [ 00:09:42.247 { 00:09:42.247 "trid": { 00:09:42.247 "trtype": "TCP", 00:09:42.247 "adrfam": "IPv4", 00:09:42.247 "traddr": "10.0.0.2", 00:09:42.247 "trsvcid": "4420", 00:09:42.247 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:42.247 }, 00:09:42.247 "ctrlr_data": { 00:09:42.247 "cntlid": 1, 00:09:42.247 "vendor_id": "0x8086", 00:09:42.247 "model_number": "SPDK bdev Controller", 00:09:42.247 "serial_number": "SPDK0", 00:09:42.247 "firmware_revision": "25.01", 00:09:42.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:42.247 "oacs": { 00:09:42.247 "security": 0, 00:09:42.247 "format": 0, 00:09:42.247 "firmware": 0, 00:09:42.247 "ns_manage": 0 00:09:42.247 }, 00:09:42.247 "multi_ctrlr": true, 00:09:42.247 
"ana_reporting": false 00:09:42.247 }, 00:09:42.247 "vs": { 00:09:42.247 "nvme_version": "1.3" 00:09:42.247 }, 00:09:42.247 "ns_data": { 00:09:42.247 "id": 1, 00:09:42.247 "can_share": true 00:09:42.247 } 00:09:42.247 } 00:09:42.247 ], 00:09:42.247 "mp_policy": "active_passive" 00:09:42.247 } 00:09:42.247 } 00:09:42.247 ] 00:09:42.247 17:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1531138 00:09:42.247 17:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:42.247 17:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:42.508 Running I/O for 10 seconds... 00:09:43.449 Latency(us) 00:09:43.449 [2024-12-06T16:25:35.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.449 Nvme0n1 : 1.00 25258.00 98.66 0.00 0.00 0.00 0.00 0.00 00:09:43.449 [2024-12-06T16:25:35.515Z] =================================================================================================================== 00:09:43.449 [2024-12-06T16:25:35.515Z] Total : 25258.00 98.66 0.00 0.00 0.00 0.00 0.00 00:09:43.449 00:09:44.390 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:44.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.390 Nvme0n1 : 2.00 25428.50 99.33 0.00 0.00 0.00 0.00 0.00 00:09:44.390 [2024-12-06T16:25:36.456Z] =================================================================================================================== 00:09:44.390 [2024-12-06T16:25:36.456Z] Total : 25428.50 99.33 0.00 0.00 0.00 0.00 0.00 00:09:44.390 00:09:44.651 true 00:09:44.651 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:44.651 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:44.651 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:44.651 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:44.651 17:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1531138 00:09:45.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.612 Nvme0n1 : 3.00 25502.00 99.62 0.00 0.00 0.00 0.00 0.00 00:09:45.612 [2024-12-06T16:25:37.678Z] =================================================================================================================== 00:09:45.612 [2024-12-06T16:25:37.678Z] Total : 25502.00 99.62 0.00 0.00 0.00 0.00 0.00 00:09:45.612 00:09:46.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.553 Nvme0n1 : 4.00 25545.75 99.79 0.00 0.00 0.00 0.00 0.00 00:09:46.553 [2024-12-06T16:25:38.619Z] 
=================================================================================================================== 00:09:46.553 [2024-12-06T16:25:38.619Z] Total : 25545.75 99.79 0.00 0.00 0.00 0.00 0.00 00:09:46.553 00:09:47.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.496 Nvme0n1 : 5.00 25580.80 99.92 0.00 0.00 0.00 0.00 0.00 00:09:47.496 [2024-12-06T16:25:39.562Z] =================================================================================================================== 00:09:47.496 [2024-12-06T16:25:39.562Z] Total : 25580.80 99.92 0.00 0.00 0.00 0.00 0.00 00:09:47.496 00:09:48.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.438 Nvme0n1 : 6.00 25604.83 100.02 0.00 0.00 0.00 0.00 0.00 00:09:48.438 [2024-12-06T16:25:40.504Z] =================================================================================================================== 00:09:48.438 [2024-12-06T16:25:40.504Z] Total : 25604.83 100.02 0.00 0.00 0.00 0.00 0.00 00:09:48.438 00:09:49.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.381 Nvme0n1 : 7.00 25622.43 100.09 0.00 0.00 0.00 0.00 0.00 00:09:49.381 [2024-12-06T16:25:41.447Z] =================================================================================================================== 00:09:49.381 [2024-12-06T16:25:41.447Z] Total : 25622.43 100.09 0.00 0.00 0.00 0.00 0.00 00:09:49.381 00:09:50.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.766 Nvme0n1 : 8.00 25635.38 100.14 0.00 0.00 0.00 0.00 0.00 00:09:50.766 [2024-12-06T16:25:42.832Z] =================================================================================================================== 00:09:50.766 [2024-12-06T16:25:42.832Z] Total : 25635.38 100.14 0.00 0.00 0.00 0.00 0.00 00:09:50.766 00:09:51.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.709 Nvme0n1 : 9.00 25652.44 100.20 0.00 0.00 0.00 0.00 0.00 00:09:51.709 [2024-12-06T16:25:43.775Z] =================================================================================================================== 00:09:51.709 [2024-12-06T16:25:43.775Z] Total : 25652.44 100.20 0.00 0.00 0.00 0.00 0.00 00:09:51.709 00:09:52.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.653 Nvme0n1 : 10.00 25666.40 100.26 0.00 0.00 0.00 0.00 0.00 00:09:52.653 [2024-12-06T16:25:44.719Z] =================================================================================================================== 00:09:52.653 [2024-12-06T16:25:44.719Z] Total : 25666.40 100.26 0.00 0.00 0.00 0.00 0.00 00:09:52.653 00:09:52.653 00:09:52.653 Latency(us) 00:09:52.653 [2024-12-06T16:25:44.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.653 Nvme0n1 : 10.00 25664.36 100.25 0.00 0.00 4983.77 2525.87 12561.07 00:09:52.653 [2024-12-06T16:25:44.719Z] =================================================================================================================== 00:09:52.653 [2024-12-06T16:25:44.719Z] Total : 25664.36 100.25 0.00 0.00 4983.77 2525.87 12561.07 00:09:52.653 { 00:09:52.653 "results": [ 00:09:52.653 { 00:09:52.653 "job": "Nvme0n1", 00:09:52.653 "core_mask": "0x2", 00:09:52.653 "workload": "randwrite", 00:09:52.653 "status": "finished", 00:09:52.653 "queue_depth": 128, 00:09:52.653 "io_size": 4096, 
00:09:52.653 "runtime": 10.003327, 00:09:52.653 "iops": 25664.36146693995, 00:09:52.653 "mibps": 100.25141198023418, 00:09:52.653 "io_failed": 0, 00:09:52.653 "io_timeout": 0, 00:09:52.653 "avg_latency_us": 4983.765017067284, 00:09:52.653 "min_latency_us": 2525.866666666667, 00:09:52.653 "max_latency_us": 12561.066666666668 00:09:52.653 } 00:09:52.653 ], 00:09:52.653 "core_count": 1 00:09:52.653 } 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1530804 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1530804 ']' 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1530804 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1530804 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1530804' 00:09:52.653 killing process with pid 1530804 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1530804 00:09:52.653 Received shutdown signal, test time was about 10.000000 seconds 00:09:52.653 00:09:52.653 Latency(us) 00:09:52.653 [2024-12-06T16:25:44.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.653 [2024-12-06T16:25:44.719Z] =================================================================================================================== 00:09:52.653 [2024-12-06T16:25:44.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1530804 00:09:52.653 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:52.915 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:52.915 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:52.915 17:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:53.176 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:53.176 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:53.176 17:25:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:53.436 [2024-12-06 17:25:45.309852] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:53.436 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:53.437 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:53.696 request: 00:09:53.696 { 00:09:53.696 "uuid": "6bb9e624-1df7-485b-90ac-4a6437e1f9da", 00:09:53.696 "method": "bdev_lvol_get_lvstores", 00:09:53.696 "req_id": 1 00:09:53.696 } 00:09:53.696 Got JSON-RPC error response 00:09:53.696 response: 00:09:53.696 { 00:09:53.696 "code": -19, 00:09:53.696 "message": "No such device" 00:09:53.696 } 00:09:53.696 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:53.696 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:53.696 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:53.696 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:53.697 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:53.697 aio_bdev 00:09:53.697 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 80ee756c-634d-4e6d-88be-17092527f6b7 00:09:53.697 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=80ee756c-634d-4e6d-88be-17092527f6b7 00:09:53.697 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:53.697 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:53.697 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.697 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.697 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:53.957 17:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 80ee756c-634d-4e6d-88be-17092527f6b7 -t 2000 00:09:54.217 [ 00:09:54.217 { 00:09:54.217 "name": "80ee756c-634d-4e6d-88be-17092527f6b7", 00:09:54.217 "aliases": [ 00:09:54.217 "lvs/lvol" 00:09:54.217 ], 00:09:54.217 "product_name": "Logical Volume", 00:09:54.217 "block_size": 4096, 00:09:54.217 "num_blocks": 38912, 00:09:54.217 "uuid": "80ee756c-634d-4e6d-88be-17092527f6b7", 00:09:54.217 "assigned_rate_limits": { 00:09:54.217 "rw_ios_per_sec": 0, 00:09:54.217 "rw_mbytes_per_sec": 0, 00:09:54.217 "r_mbytes_per_sec": 0, 00:09:54.217 "w_mbytes_per_sec": 0 00:09:54.217 }, 00:09:54.217 "claimed": false, 00:09:54.217 "zoned": false, 00:09:54.217 "supported_io_types": { 00:09:54.217 "read": true, 00:09:54.217 "write": true, 00:09:54.217 "unmap": true, 00:09:54.217 "flush": false, 00:09:54.217 "reset": true, 00:09:54.217 "nvme_admin": false, 00:09:54.217 "nvme_io": false, 00:09:54.217 "nvme_io_md": false, 00:09:54.217 "write_zeroes": true, 00:09:54.217 "zcopy": false, 00:09:54.217 "get_zone_info": false, 00:09:54.217 "zone_management": false, 00:09:54.217 "zone_append": false, 00:09:54.217 "compare": false, 00:09:54.217 "compare_and_write": false, 00:09:54.217 "abort": false, 00:09:54.217 "seek_hole": true, 00:09:54.217 "seek_data": true, 00:09:54.217 "copy": false, 00:09:54.217 "nvme_iov_md": false 00:09:54.217 }, 00:09:54.217 "driver_specific": { 00:09:54.217 "lvol": { 00:09:54.217 "lvol_store_uuid": "6bb9e624-1df7-485b-90ac-4a6437e1f9da", 00:09:54.217 "base_bdev": "aio_bdev", 00:09:54.217 "thin_provision": false, 00:09:54.217 "num_allocated_clusters": 38, 00:09:54.217 "snapshot": false, 00:09:54.217 "clone": false, 00:09:54.217 "esnap_clone": false 00:09:54.217 } 00:09:54.217 } 00:09:54.217 } 00:09:54.217 ] 00:09:54.217 17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:54.217 17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:54.217 
17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:54.477 17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:54.477 17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:54.477 17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:54.477 17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:54.477 17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80ee756c-634d-4e6d-88be-17092527f6b7 00:09:54.738 17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6bb9e624-1df7-485b-90ac-4a6437e1f9da 00:09:54.999 17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:54.999 17:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:54.999 00:09:54.999 real 0m16.065s 00:09:54.999 user 0m15.756s 00:09:54.999 sys 0m1.451s 00:09:54.999 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.999 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:54.999 ************************************ 00:09:54.999 END TEST lvs_grow_clean 00:09:54.999 ************************************ 00:09:54.999 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:54.999 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.999 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.999 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:55.260 ************************************ 00:09:55.260 START TEST lvs_grow_dirty 00:09:55.260 ************************************ 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:55.260 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:55.520 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:09:55.520 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:09:55.520 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:55.780 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:55.780 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:55.780 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e lvol 150 00:09:55.780 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=149973c6-f768-4429-afa1-4ce596b45fe1 00:09:55.780 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:55.780 17:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:56.039 [2024-12-06 17:25:47.990296] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:56.039 [2024-12-06 17:25:47.990340] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:56.039 true 00:09:56.039 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:56.039 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:09:56.299 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:56.299 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:56.299 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 149973c6-f768-4429-afa1-4ce596b45fe1 00:09:56.559 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:56.821 [2024-12-06 17:25:48.636164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1533901 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1533901 /var/tmp/bdevperf.sock 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1533901 ']' 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:56.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.821 17:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:56.821 [2024-12-06 17:25:48.868666] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
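For reference, the dirty-grow setup traced above condenses to the following sequence; $SPDK_DIR, $AIO_FILE and $lvs_uuid are illustrative stand-ins for the absolute Jenkins paths and the 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e UUID printed in the log:

  truncate -s 200M "$AIO_FILE"                                          # 200M backing file for the AIO bdev
  "$SPDK_DIR/scripts/rpc.py" bdev_aio_create "$AIO_FILE" aio_bdev 4096
  lvs_uuid=$("$SPDK_DIR/scripts/rpc.py" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  "$SPDK_DIR/scripts/rpc.py" bdev_lvol_create -u "$lvs_uuid" lvol 150   # 150M lvol, exported over NVMe/TCP below
  truncate -s 400M "$AIO_FILE"                                          # grow the backing file from 200M to 400M...
  "$SPDK_DIR/scripts/rpc.py" bdev_aio_rescan aio_bdev                   # ...and rescan: 51200 -> 102400 blocks

The actual bdev_lvol_grow_lvstore call is then issued while bdevperf drives randwrite I/O, as the trace below shows.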
00:09:56.821 [2024-12-06 17:25:48.868716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533901 ] 00:09:57.081 [2024-12-06 17:25:48.951509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.081 [2024-12-06 17:25:48.981143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.652 17:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.652 17:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:57.652 17:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:58.224 Nvme0n1 00:09:58.224 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:58.224 [ 00:09:58.224 { 00:09:58.224 "name": "Nvme0n1", 00:09:58.224 "aliases": [ 00:09:58.224 "149973c6-f768-4429-afa1-4ce596b45fe1" 00:09:58.224 ], 00:09:58.224 "product_name": "NVMe disk", 00:09:58.224 "block_size": 4096, 00:09:58.224 "num_blocks": 38912, 00:09:58.224 "uuid": "149973c6-f768-4429-afa1-4ce596b45fe1", 00:09:58.224 "numa_id": 0, 00:09:58.224 "assigned_rate_limits": { 00:09:58.224 "rw_ios_per_sec": 0, 00:09:58.224 "rw_mbytes_per_sec": 0, 00:09:58.224 "r_mbytes_per_sec": 0, 00:09:58.224 "w_mbytes_per_sec": 0 00:09:58.224 }, 00:09:58.224 "claimed": false, 00:09:58.224 "zoned": false, 00:09:58.224 "supported_io_types": { 00:09:58.224 "read": true, 00:09:58.224 "write": true, 00:09:58.224 "unmap": true, 00:09:58.224 "flush": true, 00:09:58.224 "reset": true, 00:09:58.224 "nvme_admin": true, 00:09:58.224 "nvme_io": true, 00:09:58.224 "nvme_io_md": false, 00:09:58.224 "write_zeroes": true, 00:09:58.224 "zcopy": false, 00:09:58.224 "get_zone_info": false, 00:09:58.224 "zone_management": false, 00:09:58.224 "zone_append": false, 00:09:58.224 "compare": true, 00:09:58.224 "compare_and_write": true, 00:09:58.224 "abort": true, 00:09:58.224 "seek_hole": false, 00:09:58.224 "seek_data": false, 00:09:58.224 "copy": true, 00:09:58.224 "nvme_iov_md": false 00:09:58.224 }, 00:09:58.224 "memory_domains": [ 00:09:58.224 { 00:09:58.224 "dma_device_id": "system", 00:09:58.224 "dma_device_type": 1 00:09:58.224 } 00:09:58.224 ], 00:09:58.224 "driver_specific": { 00:09:58.224 "nvme": [ 00:09:58.224 { 00:09:58.224 "trid": { 00:09:58.224 "trtype": "TCP", 00:09:58.224 "adrfam": "IPv4", 00:09:58.224 "traddr": "10.0.0.2", 00:09:58.224 "trsvcid": "4420", 00:09:58.224 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:58.224 }, 00:09:58.224 "ctrlr_data": { 00:09:58.224 "cntlid": 1, 00:09:58.224 "vendor_id": "0x8086", 00:09:58.224 "model_number": "SPDK bdev Controller", 00:09:58.224 "serial_number": "SPDK0", 00:09:58.224 "firmware_revision": "25.01", 00:09:58.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:58.224 "oacs": { 00:09:58.224 "security": 0, 00:09:58.224 "format": 0, 00:09:58.224 "firmware": 0, 00:09:58.224 "ns_manage": 0 00:09:58.224 }, 00:09:58.224 "multi_ctrlr": true, 00:09:58.224 
"ana_reporting": false 00:09:58.224 }, 00:09:58.224 "vs": { 00:09:58.224 "nvme_version": "1.3" 00:09:58.224 }, 00:09:58.224 "ns_data": { 00:09:58.224 "id": 1, 00:09:58.224 "can_share": true 00:09:58.224 } 00:09:58.224 } 00:09:58.224 ], 00:09:58.224 "mp_policy": "active_passive" 00:09:58.224 } 00:09:58.224 } 00:09:58.224 ] 00:09:58.224 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1534241 00:09:58.224 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:58.224 17:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:58.485 Running I/O for 10 seconds... 00:09:59.427 Latency(us) 00:09:59.427 [2024-12-06T16:25:51.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.427 Nvme0n1 : 1.00 25175.00 98.34 0.00 0.00 0.00 0.00 0.00 00:09:59.427 [2024-12-06T16:25:51.493Z] =================================================================================================================== 00:09:59.427 [2024-12-06T16:25:51.493Z] Total : 25175.00 98.34 0.00 0.00 0.00 0.00 0.00 00:09:59.427 00:10:00.369 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:10:00.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.369 Nvme0n1 : 2.00 25355.00 99.04 0.00 0.00 0.00 0.00 0.00 00:10:00.369 [2024-12-06T16:25:52.435Z] =================================================================================================================== 00:10:00.369 [2024-12-06T16:25:52.435Z] Total : 25355.00 99.04 0.00 0.00 0.00 0.00 0.00 00:10:00.369 00:10:00.369 true 00:10:00.631 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:10:00.631 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:00.631 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:00.631 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:00.631 17:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1534241 00:10:01.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.573 Nvme0n1 : 3.00 25436.00 99.36 0.00 0.00 0.00 0.00 0.00 00:10:01.573 [2024-12-06T16:25:53.639Z] =================================================================================================================== 00:10:01.573 [2024-12-06T16:25:53.639Z] Total : 25436.00 99.36 0.00 0.00 0.00 0.00 0.00 00:10:01.573 00:10:02.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.513 Nvme0n1 : 4.00 25492.75 99.58 0.00 0.00 0.00 0.00 0.00 00:10:02.513 [2024-12-06T16:25:54.579Z] 
===================================================================================================================
00:10:02.513 [2024-12-06T16:25:54.579Z] Total : 25492.75 99.58 0.00 0.00 0.00 0.00 0.00
00:10:03.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:03.472 Nvme0n1 : 5.00 25538.40 99.76 0.00 0.00 0.00 0.00 0.00
00:10:03.472 [2024-12-06T16:25:55.538Z] ===================================================================================================================
00:10:03.472 [2024-12-06T16:25:55.538Z] Total : 25538.40 99.76 0.00 0.00 0.00 0.00 0.00
00:10:04.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:04.416 Nvme0n1 : 6.00 25565.00 99.86 0.00 0.00 0.00 0.00 0.00
00:10:04.416 [2024-12-06T16:25:56.482Z] ===================================================================================================================
00:10:04.416 [2024-12-06T16:25:56.482Z] Total : 25565.00 99.86 0.00 0.00 0.00 0.00 0.00
00:10:05.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:05.358 Nvme0n1 : 7.00 25592.43 99.97 0.00 0.00 0.00 0.00 0.00
00:10:05.358 [2024-12-06T16:25:57.424Z] ===================================================================================================================
00:10:05.358 [2024-12-06T16:25:57.424Z] Total : 25592.43 99.97 0.00 0.00 0.00 0.00 0.00
00:10:06.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:06.306 Nvme0n1 : 8.00 25609.38 100.04 0.00 0.00 0.00 0.00 0.00
00:10:06.306 [2024-12-06T16:25:58.372Z] ===================================================================================================================
00:10:06.306 [2024-12-06T16:25:58.372Z] Total : 25609.38 100.04 0.00 0.00 0.00 0.00 0.00
00:10:07.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:07.693 Nvme0n1 : 9.00 25622.22 100.09 0.00 0.00 0.00 0.00 0.00
00:10:07.693 [2024-12-06T16:25:59.759Z] ===================================================================================================================
00:10:07.693 [2024-12-06T16:25:59.759Z] Total : 25622.22 100.09 0.00 0.00 0.00 0.00 0.00
00:10:08.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:08.636 Nvme0n1 : 10.00 25632.70 100.13 0.00 0.00 0.00 0.00 0.00
00:10:08.636 [2024-12-06T16:26:00.702Z] ===================================================================================================================
00:10:08.636 [2024-12-06T16:26:00.702Z] Total : 25632.70 100.13 0.00 0.00 0.00 0.00 0.00
00:10:08.636
00:10:08.636 Latency(us)
00:10:08.636 [2024-12-06T16:26:00.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:08.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:10:08.637 Nvme0n1 : 10.00 25631.20 100.12 0.00 0.00 4990.65 2962.77 12724.91
00:10:08.637 [2024-12-06T16:26:00.703Z] ===================================================================================================================
00:10:08.637 [2024-12-06T16:26:00.703Z] Total : 25631.20 100.12 0.00 0.00 4990.65 2962.77 12724.91
00:10:08.637 {
00:10:08.637 "results": [
00:10:08.637 {
00:10:08.637 "job": "Nvme0n1",
00:10:08.637 "core_mask": "0x2",
00:10:08.637 "workload": "randwrite",
00:10:08.637 "status": "finished",
00:10:08.637 "queue_depth": 128,
00:10:08.637 "io_size": 4096,
00:10:08.637 "runtime": 10.003123,
00:10:08.637 "iops": 25631.19537768355,
00:10:08.637 "mibps": 100.12185694407637,
00:10:08.637 "io_failed": 0,
00:10:08.637 "io_timeout": 0,
00:10:08.637 "avg_latency_us": 4990.6530758109975,
00:10:08.637 "min_latency_us": 2962.7733333333335,
00:10:08.637 "max_latency_us": 12724.906666666666
00:10:08.637 }
00:10:08.637 ],
00:10:08.637 "core_count": 1
00:10:08.637 }
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1533901
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1533901 ']'
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1533901
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1533901
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1533901'
00:10:08.637 killing process with pid 1533901
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1533901
00:10:08.637 Received shutdown signal, test time was about 10.000000 seconds
00:10:08.637
00:10:08.637 Latency(us)
00:10:08.637 [2024-12-06T16:26:00.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:08.637 [2024-12-06T16:26:00.703Z] ===================================================================================================================
00:10:08.637 [2024-12-06T16:26:00.703Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1533901
00:10:08.637 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:08.899 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:10:08.899 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e
00:10:08.899 17:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:10:09.173 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:10:09.173 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:10:09.173 17:26:01
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1530090 00:10:09.173 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1530090 00:10:09.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1530090 Killed "${NVMF_APP[@]}" "$@" 00:10:09.173 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1536447 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1536447 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1536447 ']' 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.174 17:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:09.437 [2024-12-06 17:26:01.248586] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:10:09.437 [2024-12-06 17:26:01.248672] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.437 [2024-12-06 17:26:01.339646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.437 [2024-12-06 17:26:01.370441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.437 [2024-12-06 17:26:01.370470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.437 [2024-12-06 17:26:01.370475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.437 [2024-12-06 17:26:01.370481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
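At this point the first nvmf_tgt has been killed with SIGKILL and a fresh instance (pid 1536447) is coming up, so the grown lvstore was never cleanly shut down on disk. What follows boils down to roughly this check, reusing the lvstore UUID from earlier in the trace; $SPDK_DIR and $AIO_FILE are again illustrative stand-ins, and 61/99 are the free/total cluster counts the test expects after the grow:

  "$SPDK_DIR/scripts/rpc.py" bdev_aio_create "$AIO_FILE" aio_bdev 4096   # re-attach the file; blobstore recovery replays the metadata
  lvs=18fb3dba-5f22-46b1-ac5f-63cae8a53d2e
  free=$("$SPDK_DIR/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  total=$("$SPDK_DIR/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))   # the grown geometry must survive the crash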
00:10:09.437 [2024-12-06 17:26:01.370485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.437 [2024-12-06 17:26:01.370934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.006 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.006 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:10.006 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:10.006 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.006 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:10.006 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.006 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:10.266 [2024-12-06 17:26:02.222102] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:10.266 [2024-12-06 17:26:02.222176] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:10.266 [2024-12-06 17:26:02.222200] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:10.266 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:10.266 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 149973c6-f768-4429-afa1-4ce596b45fe1 00:10:10.266 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=149973c6-f768-4429-afa1-4ce596b45fe1 00:10:10.266 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.266 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:10.266 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.266 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.266 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:10.526 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 149973c6-f768-4429-afa1-4ce596b45fe1 -t 2000 00:10:10.526 [ 00:10:10.526 { 00:10:10.526 "name": "149973c6-f768-4429-afa1-4ce596b45fe1", 00:10:10.526 "aliases": [ 00:10:10.526 "lvs/lvol" 00:10:10.526 ], 00:10:10.526 "product_name": "Logical Volume", 00:10:10.526 "block_size": 4096, 00:10:10.526 "num_blocks": 38912, 00:10:10.526 "uuid": "149973c6-f768-4429-afa1-4ce596b45fe1", 00:10:10.526 "assigned_rate_limits": { 00:10:10.526 "rw_ios_per_sec": 0, 00:10:10.526 "rw_mbytes_per_sec": 0, 
00:10:10.526 "r_mbytes_per_sec": 0, 00:10:10.526 "w_mbytes_per_sec": 0 00:10:10.526 }, 00:10:10.526 "claimed": false, 00:10:10.526 "zoned": false, 00:10:10.526 "supported_io_types": { 00:10:10.526 "read": true, 00:10:10.526 "write": true, 00:10:10.526 "unmap": true, 00:10:10.526 "flush": false, 00:10:10.526 "reset": true, 00:10:10.526 "nvme_admin": false, 00:10:10.526 "nvme_io": false, 00:10:10.526 "nvme_io_md": false, 00:10:10.526 "write_zeroes": true, 00:10:10.526 "zcopy": false, 00:10:10.526 "get_zone_info": false, 00:10:10.526 "zone_management": false, 00:10:10.526 "zone_append": false, 00:10:10.526 "compare": false, 00:10:10.526 "compare_and_write": false, 00:10:10.526 "abort": false, 00:10:10.526 "seek_hole": true, 00:10:10.526 "seek_data": true, 00:10:10.526 "copy": false, 00:10:10.526 "nvme_iov_md": false 00:10:10.526 }, 00:10:10.526 "driver_specific": { 00:10:10.526 "lvol": { 00:10:10.526 "lvol_store_uuid": "18fb3dba-5f22-46b1-ac5f-63cae8a53d2e", 00:10:10.526 "base_bdev": "aio_bdev", 00:10:10.526 "thin_provision": false, 00:10:10.526 "num_allocated_clusters": 38, 00:10:10.526 "snapshot": false, 00:10:10.526 "clone": false, 00:10:10.526 "esnap_clone": false 00:10:10.526 } 00:10:10.526 } 00:10:10.526 } 00:10:10.526 ] 00:10:10.526 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:10.526 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:10:10.787 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:10.787 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:10.787 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:10:10.787 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:11.047 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:11.047 17:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:11.047 [2024-12-06 17:26:03.066703] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:10:11.308 request: 00:10:11.308 { 00:10:11.308 "uuid": "18fb3dba-5f22-46b1-ac5f-63cae8a53d2e", 00:10:11.308 "method": "bdev_lvol_get_lvstores", 00:10:11.308 "req_id": 1 00:10:11.308 } 00:10:11.308 Got JSON-RPC error response 00:10:11.308 response: 00:10:11.308 { 00:10:11.308 "code": -19, 00:10:11.308 "message": "No such device" 00:10:11.308 } 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.308 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:11.586 aio_bdev 00:10:11.586 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 149973c6-f768-4429-afa1-4ce596b45fe1 00:10:11.586 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=149973c6-f768-4429-afa1-4ce596b45fe1 00:10:11.586 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.586 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:11.586 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.586 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.586 17:26:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:11.586 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 149973c6-f768-4429-afa1-4ce596b45fe1 -t 2000 00:10:11.875 [ 00:10:11.875 { 00:10:11.875 "name": "149973c6-f768-4429-afa1-4ce596b45fe1", 00:10:11.875 "aliases": [ 00:10:11.875 "lvs/lvol" 00:10:11.875 ], 00:10:11.875 "product_name": "Logical Volume", 00:10:11.875 "block_size": 4096, 00:10:11.875 "num_blocks": 38912, 00:10:11.875 "uuid": "149973c6-f768-4429-afa1-4ce596b45fe1", 00:10:11.875 "assigned_rate_limits": { 00:10:11.875 "rw_ios_per_sec": 0, 00:10:11.875 "rw_mbytes_per_sec": 0, 00:10:11.875 "r_mbytes_per_sec": 0, 00:10:11.876 "w_mbytes_per_sec": 0 00:10:11.876 }, 00:10:11.876 "claimed": false, 00:10:11.876 "zoned": false, 00:10:11.876 "supported_io_types": { 00:10:11.876 "read": true, 00:10:11.876 "write": true, 00:10:11.876 "unmap": true, 00:10:11.876 "flush": false, 00:10:11.876 "reset": true, 00:10:11.876 "nvme_admin": false, 00:10:11.876 "nvme_io": false, 00:10:11.876 "nvme_io_md": false, 00:10:11.876 "write_zeroes": true, 00:10:11.876 "zcopy": false, 00:10:11.876 "get_zone_info": false, 00:10:11.876 "zone_management": false, 00:10:11.876 "zone_append": false, 00:10:11.876 "compare": false, 00:10:11.876 "compare_and_write": false, 00:10:11.876 "abort": false, 00:10:11.876 "seek_hole": true, 00:10:11.876 "seek_data": true, 00:10:11.876 "copy": false, 00:10:11.876 "nvme_iov_md": false 00:10:11.876 }, 00:10:11.876 "driver_specific": { 00:10:11.876 "lvol": { 00:10:11.876 "lvol_store_uuid": "18fb3dba-5f22-46b1-ac5f-63cae8a53d2e", 00:10:11.876 "base_bdev": "aio_bdev", 00:10:11.876 "thin_provision": false, 00:10:11.876 "num_allocated_clusters": 38, 00:10:11.876 "snapshot": false, 00:10:11.876 "clone": false, 00:10:11.876 "esnap_clone": false 00:10:11.876 } 00:10:11.876 } 00:10:11.876 } 00:10:11.876 ] 00:10:11.876 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:11.876 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:10:11.876 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:12.139 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:12.139 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:10:12.139 17:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:12.139 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:12.139 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 149973c6-f768-4429-afa1-4ce596b45fe1 00:10:12.399 17:26:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 18fb3dba-5f22-46b1-ac5f-63cae8a53d2e 00:10:12.660 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:12.660 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:12.660 00:10:12.660 real 0m17.616s 00:10:12.660 user 0m46.297s 00:10:12.660 sys 0m2.926s 00:10:12.660 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.660 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:12.660 ************************************ 00:10:12.660 END TEST lvs_grow_dirty 00:10:12.660 ************************************ 00:10:12.919 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:12.919 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:12.920 nvmf_trace.0 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.920 rmmod nvme_tcp 00:10:12.920 rmmod nvme_fabrics 00:10:12.920 rmmod nvme_keyring 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:12.920 
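With the kernel initiator modules unloaded (the rmmod lines above), the rest of nvmftestfini reduces, in outline, to stopping the target and undoing the host-side plumbing; nvmfpid holds the 1536447 recorded at startup, and the netns removal line is an assumption about _remove_spdk_ns, whose body this trace does not show:

  kill "$nvmfpid" && wait "$nvmfpid"                     # stop the nvmf_tgt reactor
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK test firewall rules
  ip netns delete cvl_0_0_ns_spdk                        # assumed teardown of the test namespace
  ip -4 addr flush cvl_0_1                               # clear the second test interface

The trace below follows this shape: killprocess 1536447, the iptr helper, then remove_spdk_ns before the final runtime summary.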
17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1536447 ']' 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1536447 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1536447 ']' 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1536447 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1536447 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1536447' 00:10:12.920 killing process with pid 1536447 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1536447 00:10:12.920 17:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1536447 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.179 17:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.085 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.085 00:10:15.085 real 0m45.084s 00:10:15.085 user 1m8.450s 00:10:15.085 sys 0m10.506s 00:10:15.085 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.085 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:15.085 ************************************ 00:10:15.085 END TEST nvmf_lvs_grow 00:10:15.085 ************************************ 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.347 ************************************ 00:10:15.347 START TEST nvmf_bdev_io_wait 00:10:15.347 ************************************ 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:15.347 * Looking for test storage... 00:10:15.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.347 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:15.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.608 --rc genhtml_branch_coverage=1 00:10:15.608 --rc genhtml_function_coverage=1 00:10:15.608 --rc genhtml_legend=1 00:10:15.608 --rc geninfo_all_blocks=1 00:10:15.608 --rc geninfo_unexecuted_blocks=1 00:10:15.608 00:10:15.608 ' 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:15.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.608 --rc genhtml_branch_coverage=1 00:10:15.608 --rc genhtml_function_coverage=1 00:10:15.608 --rc genhtml_legend=1 00:10:15.608 --rc geninfo_all_blocks=1 00:10:15.608 --rc geninfo_unexecuted_blocks=1 00:10:15.608 00:10:15.608 ' 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:15.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.608 --rc genhtml_branch_coverage=1 00:10:15.608 --rc genhtml_function_coverage=1 00:10:15.608 --rc genhtml_legend=1 00:10:15.608 --rc geninfo_all_blocks=1 00:10:15.608 --rc geninfo_unexecuted_blocks=1 00:10:15.608 00:10:15.608 ' 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:15.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.608 --rc genhtml_branch_coverage=1 00:10:15.608 --rc genhtml_function_coverage=1 00:10:15.608 --rc genhtml_legend=1 00:10:15.608 --rc geninfo_all_blocks=1 00:10:15.608 --rc geninfo_unexecuted_blocks=1 00:10:15.608 00:10:15.608 ' 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.608 17:26:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.608 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.609 17:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:23.781 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.781 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:23.781 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.782 17:26:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:23.782 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:23.782 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:23.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:23.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:10:23.782 00:10:23.782 --- 10.0.0.2 ping statistics --- 00:10:23.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.782 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:23.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:23.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:10:23.782 00:10:23.782 --- 10.0.0.1 ping statistics --- 00:10:23.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.782 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1541477 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1541477 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1541477 ']' 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.782 17:26:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:23.782 [2024-12-06 17:26:15.015257] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:10:23.782 [2024-12-06 17:26:15.015318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.782 [2024-12-06 17:26:15.121229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.782 [2024-12-06 17:26:15.175668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.782 [2024-12-06 17:26:15.175727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.782 [2024-12-06 17:26:15.175735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.782 [2024-12-06 17:26:15.175743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.782 [2024-12-06 17:26:15.175750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.782 [2024-12-06 17:26:15.178126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.782 [2024-12-06 17:26:15.178294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.782 [2024-12-06 17:26:15.178458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.782 [2024-12-06 17:26:15.178459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.782 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.782 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:10:24.044 [2024-12-06 17:26:15.965616] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.044 Malloc0 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:24.044 17:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:24.044 [2024-12-06 17:26:16.031136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1541704 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1541706 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.044 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.044 { 00:10:24.044 "params": { 
00:10:24.044 "name": "Nvme$subsystem", 00:10:24.044 "trtype": "$TEST_TRANSPORT", 00:10:24.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.044 "adrfam": "ipv4", 00:10:24.044 "trsvcid": "$NVMF_PORT", 00:10:24.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.044 "hdgst": ${hdgst:-false}, 00:10:24.044 "ddgst": ${ddgst:-false} 00:10:24.044 }, 00:10:24.045 "method": "bdev_nvme_attach_controller" 00:10:24.045 } 00:10:24.045 EOF 00:10:24.045 )") 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1541708 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.045 { 00:10:24.045 "params": { 00:10:24.045 "name": "Nvme$subsystem", 00:10:24.045 "trtype": "$TEST_TRANSPORT", 00:10:24.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.045 "adrfam": "ipv4", 00:10:24.045 "trsvcid": "$NVMF_PORT", 00:10:24.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.045 "hdgst": ${hdgst:-false}, 00:10:24.045 "ddgst": ${ddgst:-false} 00:10:24.045 }, 00:10:24.045 "method": "bdev_nvme_attach_controller" 00:10:24.045 } 00:10:24.045 EOF 00:10:24.045 )") 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1541711 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.045 { 00:10:24.045 "params": { 00:10:24.045 "name": "Nvme$subsystem", 00:10:24.045 "trtype": "$TEST_TRANSPORT", 00:10:24.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.045 "adrfam": "ipv4", 00:10:24.045 "trsvcid": "$NVMF_PORT", 00:10:24.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.045 "hdgst": ${hdgst:-false}, 
00:10:24.045 "ddgst": ${ddgst:-false} 00:10:24.045 }, 00:10:24.045 "method": "bdev_nvme_attach_controller" 00:10:24.045 } 00:10:24.045 EOF 00:10:24.045 )") 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.045 { 00:10:24.045 "params": { 00:10:24.045 "name": "Nvme$subsystem", 00:10:24.045 "trtype": "$TEST_TRANSPORT", 00:10:24.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.045 "adrfam": "ipv4", 00:10:24.045 "trsvcid": "$NVMF_PORT", 00:10:24.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.045 "hdgst": ${hdgst:-false}, 00:10:24.045 "ddgst": ${ddgst:-false} 00:10:24.045 }, 00:10:24.045 "method": "bdev_nvme_attach_controller" 00:10:24.045 } 00:10:24.045 EOF 00:10:24.045 )") 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1541704 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.045 "params": { 00:10:24.045 "name": "Nvme1", 00:10:24.045 "trtype": "tcp", 00:10:24.045 "traddr": "10.0.0.2", 00:10:24.045 "adrfam": "ipv4", 00:10:24.045 "trsvcid": "4420", 00:10:24.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.045 "hdgst": false, 00:10:24.045 "ddgst": false 00:10:24.045 }, 00:10:24.045 "method": "bdev_nvme_attach_controller" 00:10:24.045 }' 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.045 "params": { 00:10:24.045 "name": "Nvme1", 00:10:24.045 "trtype": "tcp", 00:10:24.045 "traddr": "10.0.0.2", 00:10:24.045 "adrfam": "ipv4", 00:10:24.045 "trsvcid": "4420", 00:10:24.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.045 "hdgst": false, 00:10:24.045 "ddgst": false 00:10:24.045 }, 00:10:24.045 "method": "bdev_nvme_attach_controller" 00:10:24.045 }' 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.045 "params": { 00:10:24.045 "name": "Nvme1", 00:10:24.045 "trtype": "tcp", 00:10:24.045 "traddr": "10.0.0.2", 00:10:24.045 "adrfam": "ipv4", 00:10:24.045 "trsvcid": "4420", 00:10:24.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.045 "hdgst": false, 00:10:24.045 "ddgst": false 00:10:24.045 }, 00:10:24.045 "method": "bdev_nvme_attach_controller" 00:10:24.045 }' 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:24.045 17:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.045 "params": { 00:10:24.045 "name": "Nvme1", 00:10:24.045 "trtype": "tcp", 00:10:24.045 "traddr": "10.0.0.2", 00:10:24.045 "adrfam": "ipv4", 00:10:24.045 "trsvcid": "4420", 00:10:24.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.045 "hdgst": false, 00:10:24.045 "ddgst": false 00:10:24.045 }, 00:10:24.045 "method": "bdev_nvme_attach_controller" 00:10:24.045 }' 00:10:24.045 [2024-12-06 17:26:16.091672] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:10:24.045 [2024-12-06 17:26:16.091746] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:24.045 [2024-12-06 17:26:16.092410] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:10:24.045 [2024-12-06 17:26:16.092476] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:24.045 [2024-12-06 17:26:16.092525] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:10:24.045 [2024-12-06 17:26:16.092579] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:24.045 [2024-12-06 17:26:16.095555] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:10:24.045 [2024-12-06 17:26:16.095623] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:24.307 [2024-12-06 17:26:16.309579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.307 [2024-12-06 17:26:16.349128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:24.569 [2024-12-06 17:26:16.402744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.569 [2024-12-06 17:26:16.443098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:24.569 [2024-12-06 17:26:16.501732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.569 [2024-12-06 17:26:16.543661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:24.569 [2024-12-06 17:26:16.569740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.569 [2024-12-06 17:26:16.607272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:24.831 Running I/O for 1 seconds... 00:10:24.831 Running I/O for 1 seconds... 00:10:24.831 Running I/O for 1 seconds... 00:10:24.831 Running I/O for 1 seconds... 00:10:25.774 14612.00 IOPS, 57.08 MiB/s 00:10:25.774 Latency(us) 00:10:25.774 [2024-12-06T16:26:17.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.774 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:25.774 Nvme1n1 : 1.01 14668.86 57.30 0.00 0.00 8698.78 4587.52 17039.36 00:10:25.774 [2024-12-06T16:26:17.840Z] =================================================================================================================== 00:10:25.774 [2024-12-06T16:26:17.840Z] Total : 14668.86 57.30 0.00 0.00 8698.78 4587.52 17039.36 00:10:25.774 6176.00 IOPS, 24.12 MiB/s 00:10:25.774 Latency(us) 00:10:25.774 [2024-12-06T16:26:17.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.774 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:25.774 Nvme1n1 : 1.02 6205.97 24.24 0.00 0.00 20451.82 5488.64 29709.65 00:10:25.774 [2024-12-06T16:26:17.840Z] =================================================================================================================== 00:10:25.774 [2024-12-06T16:26:17.840Z] Total : 6205.97 24.24 0.00 0.00 20451.82 5488.64 29709.65 00:10:25.774 179888.00 IOPS, 702.69 MiB/s 00:10:25.774 Latency(us) 00:10:25.774 [2024-12-06T16:26:17.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.774 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:25.774 Nvme1n1 : 1.00 179530.83 701.29 0.00 0.00 708.99 302.08 1979.73 00:10:25.774 [2024-12-06T16:26:17.840Z] =================================================================================================================== 00:10:25.774 [2024-12-06T16:26:17.840Z] Total : 179530.83 701.29 0.00 0.00 708.99 302.08 1979.73 00:10:26.035 17:26:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1541706 00:10:26.035 6306.00 IOPS, 24.63 MiB/s 00:10:26.035 Latency(us) 00:10:26.035 [2024-12-06T16:26:18.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.035 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:26.035 Nvme1n1 : 1.01 6392.18 24.97 0.00 0.00 19949.77 5379.41 46530.56 00:10:26.035 
[2024-12-06T16:26:18.101Z] =================================================================================================================== 00:10:26.035 [2024-12-06T16:26:18.101Z] Total : 6392.18 24.97 0.00 0.00 19949.77 5379.41 46530.56 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1541708 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1541711 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.035 rmmod nvme_tcp 00:10:26.035 rmmod nvme_fabrics 00:10:26.035 rmmod nvme_keyring 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1541477 ']' 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1541477 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1541477 ']' 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1541477 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.035 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1541477 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1541477' 00:10:26.297 killing process with pid 1541477 
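The killprocess helper traced here is defensive before signalling: it confirms the pid is still alive (kill -0) and checks its comm name, which guards against a recycled pid (for example one now owned by sudo), before killing and reaping the target. A minimal sketch, with the pid hard-coded purely for illustration:

  # sketch of the killprocess pattern from the trace (pid value is illustrative)
  pid=1541477
  if kill -0 "$pid" 2>/dev/null; then
      name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target app
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                               # reaps it; works because this shell spawned it
  fi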
00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1541477 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1541477 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.297 17:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.844 00:10:28.844 real 0m13.182s 00:10:28.844 user 0m19.847s 00:10:28.844 sys 0m7.567s 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.844 ************************************ 00:10:28.844 END TEST nvmf_bdev_io_wait 00:10:28.844 ************************************ 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:28.844 ************************************ 00:10:28.844 START TEST nvmf_queue_depth 00:10:28.844 ************************************ 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:28.844 * Looking for test storage... 
00:10:28.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.844 --rc genhtml_branch_coverage=1 00:10:28.844 --rc genhtml_function_coverage=1 00:10:28.844 --rc genhtml_legend=1 00:10:28.844 --rc geninfo_all_blocks=1 00:10:28.844 --rc geninfo_unexecuted_blocks=1 00:10:28.844 00:10:28.844 ' 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.844 --rc genhtml_branch_coverage=1 00:10:28.844 --rc genhtml_function_coverage=1 00:10:28.844 --rc genhtml_legend=1 00:10:28.844 --rc geninfo_all_blocks=1 00:10:28.844 --rc geninfo_unexecuted_blocks=1 00:10:28.844 00:10:28.844 ' 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.844 --rc genhtml_branch_coverage=1 00:10:28.844 --rc genhtml_function_coverage=1 00:10:28.844 --rc genhtml_legend=1 00:10:28.844 --rc geninfo_all_blocks=1 00:10:28.844 --rc geninfo_unexecuted_blocks=1 00:10:28.844 00:10:28.844 ' 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:28.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.844 --rc genhtml_branch_coverage=1 00:10:28.844 --rc genhtml_function_coverage=1 00:10:28.844 --rc genhtml_legend=1 00:10:28.844 --rc geninfo_all_blocks=1 00:10:28.844 --rc geninfo_unexecuted_blocks=1 00:10:28.844 00:10:28.844 ' 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.844 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.845 17:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:36.980 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.980 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:36.981 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:36.981 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:36.981 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.981 17:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:10:36.981 00:10:36.981 --- 10.0.0.2 ping statistics --- 00:10:36.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.981 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:10:36.981 00:10:36.981 --- 10.0.0.1 ping statistics --- 00:10:36.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.981 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1546410 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1546410 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1546410 ']' 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.981 17:26:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:36.981 [2024-12-06 17:26:28.299609] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
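The nvmfappstart call above boots the target inside the cvl_0_0_ns_spdk namespace that the preceding ip netns / ip link / ip addr steps wired up, then blocks in waitforlisten until the RPC socket answers; every later rpc_cmd depends on that socket being live. A condensed sketch of the startup (the polling loop is simplified; the 100-iteration bound mirrors max_retries=100 in the trace, and probing readiness with rpc_get_methods is an assumption, not the verbatim helper):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done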
00:10:36.981 [2024-12-06 17:26:28.299682] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.981 [2024-12-06 17:26:28.402962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.981 [2024-12-06 17:26:28.453143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.981 [2024-12-06 17:26:28.453196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.981 [2024-12-06 17:26:28.453205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.981 [2024-12-06 17:26:28.453212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.981 [2024-12-06 17:26:28.453223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.981 [2024-12-06 17:26:28.454005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.243 [2024-12-06 17:26:29.181625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.243 Malloc0 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.243 17:26:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.243 [2024-12-06 17:26:29.243259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1546660 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1546660 /var/tmp/bdevperf.sock 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1546660 ']' 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.243 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:37.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:37.244 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.244 17:26:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.244 [2024-12-06 17:26:29.303148] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
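The rpc_cmd calls traced across queue_depth.sh lines 23-27 build the entire target-side stack, and the bdevperf instance just launched with -q 1024 -o 4096 -w verify -t 10 consumes it over the new listener. The sequence, condensed (same RPC verbs and flags as the trace; the rpc.py paths are shortened here):

    # target side: transport -> backing bdev -> subsystem -> namespace -> listener
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf attaches through that listener, then
    # bdevperf.py perform_tests drives the 10-second run shown below
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1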
00:10:37.244 [2024-12-06 17:26:29.303215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546660 ] 00:10:37.505 [2024-12-06 17:26:29.396399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.505 [2024-12-06 17:26:29.450304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.077 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.077 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:38.077 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:38.077 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.077 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:38.338 NVMe0n1 00:10:38.338 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.338 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:38.338 Running I/O for 10 seconds... 00:10:40.294 9848.00 IOPS, 38.47 MiB/s [2024-12-06T16:26:33.741Z] 10735.50 IOPS, 41.94 MiB/s [2024-12-06T16:26:34.684Z] 10938.33 IOPS, 42.73 MiB/s [2024-12-06T16:26:35.623Z] 11244.50 IOPS, 43.92 MiB/s [2024-12-06T16:26:36.562Z] 11666.80 IOPS, 45.57 MiB/s [2024-12-06T16:26:37.500Z] 11944.67 IOPS, 46.66 MiB/s [2024-12-06T16:26:38.440Z] 12142.71 IOPS, 47.43 MiB/s [2024-12-06T16:26:39.382Z] 12334.25 IOPS, 48.18 MiB/s [2024-12-06T16:26:40.767Z] 12503.89 IOPS, 48.84 MiB/s [2024-12-06T16:26:40.767Z] 12590.40 IOPS, 49.18 MiB/s 00:10:48.701 Latency(us) 00:10:48.701 [2024-12-06T16:26:40.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.701 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:48.701 Verification LBA range: start 0x0 length 0x4000 00:10:48.701 NVMe0n1 : 10.07 12605.42 49.24 0.00 0.00 80948.93 25777.49 72526.51 00:10:48.701 [2024-12-06T16:26:40.767Z] =================================================================================================================== 00:10:48.701 [2024-12-06T16:26:40.767Z] Total : 12605.42 49.24 0.00 0.00 80948.93 25777.49 72526.51 00:10:48.701 { 00:10:48.701 "results": [ 00:10:48.701 { 00:10:48.701 "job": "NVMe0n1", 00:10:48.701 "core_mask": "0x1", 00:10:48.701 "workload": "verify", 00:10:48.701 "status": "finished", 00:10:48.701 "verify_range": { 00:10:48.701 "start": 0, 00:10:48.701 "length": 16384 00:10:48.701 }, 00:10:48.701 "queue_depth": 1024, 00:10:48.701 "io_size": 4096, 00:10:48.701 "runtime": 10.067575, 00:10:48.701 "iops": 12605.418881905523, 00:10:48.701 "mibps": 49.23991750744345, 00:10:48.701 "io_failed": 0, 00:10:48.701 "io_timeout": 0, 00:10:48.701 "avg_latency_us": 80948.93391286989, 00:10:48.701 "min_latency_us": 25777.493333333332, 00:10:48.701 "max_latency_us": 72526.50666666667 00:10:48.701 } 00:10:48.701 ], 00:10:48.701 "core_count": 1 00:10:48.701 } 00:10:48.701 17:26:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1546660 00:10:48.701 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1546660 ']' 00:10:48.701 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1546660 00:10:48.701 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:48.701 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.701 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1546660 00:10:48.701 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.701 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.701 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1546660' 00:10:48.701 killing process with pid 1546660 00:10:48.701 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1546660 00:10:48.702 Received shutdown signal, test time was about 10.000000 seconds 00:10:48.702 00:10:48.702 Latency(us) 00:10:48.702 [2024-12-06T16:26:40.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.702 [2024-12-06T16:26:40.768Z] =================================================================================================================== 00:10:48.702 [2024-12-06T16:26:40.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1546660 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.702 rmmod nvme_tcp 00:10:48.702 rmmod nvme_fabrics 00:10:48.702 rmmod nvme_keyring 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1546410 ']' 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1546410 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1546410 ']' 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1546410 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1546410 00:10:48.702 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1546410' 00:10:48.971 killing process with pid 1546410 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1546410 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1546410 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.514 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:51.514 00:10:51.514 real 0m22.482s 00:10:51.514 user 0m25.789s 00:10:51.514 sys 0m7.060s 00:10:51.514 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.514 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.514 ************************************ 00:10:51.514 END TEST nvmf_queue_depth 00:10:51.514 ************************************ 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.514 ************************************ 00:10:51.514 START TEST nvmf_target_multipath 00:10:51.514 ************************************ 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:51.514 * Looking for test storage... 00:10:51.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.514 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.515 --rc genhtml_branch_coverage=1 00:10:51.515 --rc genhtml_function_coverage=1 00:10:51.515 --rc genhtml_legend=1 00:10:51.515 --rc geninfo_all_blocks=1 00:10:51.515 --rc geninfo_unexecuted_blocks=1 00:10:51.515 00:10:51.515 ' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.515 --rc genhtml_branch_coverage=1 00:10:51.515 --rc genhtml_function_coverage=1 00:10:51.515 --rc genhtml_legend=1 00:10:51.515 --rc geninfo_all_blocks=1 00:10:51.515 --rc geninfo_unexecuted_blocks=1 00:10:51.515 00:10:51.515 ' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:51.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.515 --rc genhtml_branch_coverage=1 00:10:51.515 --rc genhtml_function_coverage=1 00:10:51.515 --rc genhtml_legend=1 00:10:51.515 --rc geninfo_all_blocks=1 00:10:51.515 --rc geninfo_unexecuted_blocks=1 00:10:51.515 00:10:51.515 ' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.515 --rc genhtml_branch_coverage=1 00:10:51.515 --rc genhtml_function_coverage=1 00:10:51.515 --rc genhtml_legend=1 00:10:51.515 --rc geninfo_all_blocks=1 00:10:51.515 --rc geninfo_unexecuted_blocks=1 00:10:51.515 00:10:51.515 ' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:51.515 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:59.730 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:59.730 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:59.730 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.730 17:26:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:59.730 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.730 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:59.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:59.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:10:59.731 00:10:59.731 --- 10.0.0.2 ping statistics --- 00:10:59.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.731 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:10:59.731 00:10:59.731 --- 10.0.0.1 ping statistics --- 00:10:59.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.731 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:59.731 only one NIC for nvmf test 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
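The trace above captures how the harness builds a two-sided NVMe/TCP test topology out of a single dual-port E810 NIC: port cvl_0_0 is moved into a private network namespace to act as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting traffic on the default NVMe/TCP port. A minimal standalone sketch of the same setup, using the interface names and addresses from this run (the iptables comment is shortened here; substitute your own interfaces on other hardware):

    # Two-sided NVMe/TCP test topology on one host, as traced above.
    # cvl_0_0 / cvl_0_1 and 10.0.0.0/24 are the values from this log.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP on port 4420; the comment tags the rule for later cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The namespace split is what forces the two pings above onto the wire between the ports instead of letting the kernel short-circuit them through the local stack.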
00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.731 rmmod nvme_tcp 00:10:59.731 rmmod nvme_fabrics 00:10:59.731 rmmod nvme_keyring 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.731 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.129 00:11:01.129 real 0m9.931s 00:11:01.129 user 0m2.220s 00:11:01.129 sys 0m5.674s 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.129 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:01.129 ************************************ 00:11:01.129 END TEST nvmf_target_multipath 00:11:01.129 ************************************ 00:11:01.129 17:26:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:01.129 17:26:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.129 17:26:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.129 17:26:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.129 ************************************ 00:11:01.129 START TEST nvmf_zcopy 00:11:01.129 ************************************ 00:11:01.129 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:01.129 * Looking for test storage... 
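The timing block and the END TEST / START TEST banners above (real 0m9.931s through "START TEST nvmf_zcopy") come from the harness's run_test wrapper, which times each test script and brackets its output. A rough reconstruction of what such a wrapper does, not the exact SPDK implementation (the real helper also does xtrace bookkeeping, omitted here):

    # Hypothetical run_test-style wrapper: banner, timed body, banner,
    # propagating the test script's exit status.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # 'time' prints the real/user/sys block seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # usage, mirroring the invocation traced in this log:
    # run_test nvmf_zcopy ./test/nvmf/target/zcopy.sh --transport=tcp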
00:11:01.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.129 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:01.129 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:11:01.129 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:01.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.392 --rc genhtml_branch_coverage=1 00:11:01.392 --rc genhtml_function_coverage=1 00:11:01.392 --rc genhtml_legend=1 00:11:01.392 --rc geninfo_all_blocks=1 00:11:01.392 --rc geninfo_unexecuted_blocks=1 00:11:01.392 00:11:01.392 ' 00:11:01.392 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:01.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.392 --rc genhtml_branch_coverage=1 00:11:01.392 --rc genhtml_function_coverage=1 00:11:01.392 --rc genhtml_legend=1 00:11:01.393 --rc geninfo_all_blocks=1 00:11:01.393 --rc geninfo_unexecuted_blocks=1 00:11:01.393 00:11:01.393 ' 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:01.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.393 --rc genhtml_branch_coverage=1 00:11:01.393 --rc genhtml_function_coverage=1 00:11:01.393 --rc genhtml_legend=1 00:11:01.393 --rc geninfo_all_blocks=1 00:11:01.393 --rc geninfo_unexecuted_blocks=1 00:11:01.393 00:11:01.393 ' 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:01.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.393 --rc genhtml_branch_coverage=1 00:11:01.393 --rc genhtml_function_coverage=1 00:11:01.393 --rc genhtml_legend=1 00:11:01.393 --rc geninfo_all_blocks=1 00:11:01.393 --rc geninfo_unexecuted_blocks=1 00:11:01.393 00:11:01.393 ' 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.393 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:09.536 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.536 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:09.537 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:09.537 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:09.537 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:09.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:11:09.537 00:11:09.537 --- 10.0.0.2 ping statistics --- 00:11:09.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.537 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:09.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:11:09.537 00:11:09.537 --- 10.0.0.1 ping statistics --- 00:11:09.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.537 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1557417 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1557417 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1557417 ']' 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.537 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.537 [2024-12-06 17:27:00.878440] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
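Here nvmfappstart launches the target application inside the namespace and records its pid (nvmfpid=1557417); the "Starting SPDK" banner and the DPDK EAL parameter dump that follows are nvmf_tgt's own startup output. The equivalent manual steps look roughly like the sketch below, with a crude polling loop standing in for the harness's waitforlisten; the rpc.py probe and the /var/tmp/spdk.sock path are assumptions based on SPDK defaults:

    # Start nvmf_tgt in the target namespace with the flags from this run,
    # then wait until its RPC socket answers before issuing rpc.py calls.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this log
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # stand-in for waitforlisten: poll the default RPC socket until it responds
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.2
    done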
00:11:09.537 [2024-12-06 17:27:00.878510] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.537 [2024-12-06 17:27:00.978089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.537 [2024-12-06 17:27:01.028247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.537 [2024-12-06 17:27:01.028308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.537 [2024-12-06 17:27:01.028317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.537 [2024-12-06 17:27:01.028324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.537 [2024-12-06 17:27:01.028330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.537 [2024-12-06 17:27:01.029093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.797 [2024-12-06 17:27:01.757974] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.797 [2024-12-06 17:27:01.782264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.797 malloc0 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:09.797 { 00:11:09.797 "params": { 00:11:09.797 "name": "Nvme$subsystem", 00:11:09.797 "trtype": "$TEST_TRANSPORT", 00:11:09.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.797 "adrfam": "ipv4", 00:11:09.797 "trsvcid": "$NVMF_PORT", 00:11:09.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.797 "hdgst": ${hdgst:-false}, 00:11:09.797 "ddgst": ${ddgst:-false} 00:11:09.797 }, 00:11:09.797 "method": "bdev_nvme_attach_controller" 00:11:09.797 } 00:11:09.797 EOF 00:11:09.797 )") 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
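With the target listening, the rpc_cmd calls traced above provision it end to end: a TCP transport created with zero-copy enabled, a subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev exposed as namespace 1. The same sequence as direct rpc.py calls; rpc_cmd is the harness's wrapper around rpc.py, and the socket path here is the SPDK default rather than anything shown in the trace:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this log
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport, zero-copy on
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                       # allow any host, 10 ns max
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                           # data listener
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

All flags above are taken verbatim from the trace; only the explicit socket path is an added assumption.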
00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:09.797 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:09.797 "params": { 00:11:09.797 "name": "Nvme1", 00:11:09.797 "trtype": "tcp", 00:11:09.797 "traddr": "10.0.0.2", 00:11:09.797 "adrfam": "ipv4", 00:11:09.797 "trsvcid": "4420", 00:11:09.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.797 "hdgst": false, 00:11:09.797 "ddgst": false 00:11:09.797 }, 00:11:09.797 "method": "bdev_nvme_attach_controller" 00:11:09.797 }' 00:11:10.056 [2024-12-06 17:27:01.881763] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:11:10.056 [2024-12-06 17:27:01.881834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557583 ] 00:11:10.056 [2024-12-06 17:27:01.977002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.056 [2024-12-06 17:27:02.030133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.316 Running I/O for 10 seconds... 00:11:12.215 6459.00 IOPS, 50.46 MiB/s [2024-12-06T16:27:05.664Z] 6691.00 IOPS, 52.27 MiB/s [2024-12-06T16:27:06.601Z] 7703.67 IOPS, 60.18 MiB/s [2024-12-06T16:27:07.540Z] 8227.75 IOPS, 64.28 MiB/s [2024-12-06T16:27:08.479Z] 8547.00 IOPS, 66.77 MiB/s [2024-12-06T16:27:09.423Z] 8760.00 IOPS, 68.44 MiB/s [2024-12-06T16:27:10.364Z] 8909.71 IOPS, 69.61 MiB/s [2024-12-06T16:27:11.307Z] 9017.88 IOPS, 70.45 MiB/s [2024-12-06T16:27:12.693Z] 9107.56 IOPS, 71.15 MiB/s [2024-12-06T16:27:12.693Z] 9175.50 IOPS, 71.68 MiB/s 00:11:20.627 Latency(us) 00:11:20.627 [2024-12-06T16:27:12.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.627 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:20.627 Verification LBA range: start 0x0 length 0x1000 00:11:20.627 Nvme1n1 : 10.01 9179.11 71.71 0.00 0.00 13897.37 2143.57 27634.35 00:11:20.627 [2024-12-06T16:27:12.693Z] =================================================================================================================== 00:11:20.627 [2024-12-06T16:27:12.693Z] Total : 9179.11 71.71 0.00 0.00 13897.37 2143.57 27634.35 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1560075 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:20.627 { 00:11:20.627 "params": { 00:11:20.627 "name": 
"Nvme$subsystem", 00:11:20.627 "trtype": "$TEST_TRANSPORT", 00:11:20.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:20.627 "adrfam": "ipv4", 00:11:20.627 "trsvcid": "$NVMF_PORT", 00:11:20.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:20.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:20.627 "hdgst": ${hdgst:-false}, 00:11:20.627 "ddgst": ${ddgst:-false} 00:11:20.627 }, 00:11:20.627 "method": "bdev_nvme_attach_controller" 00:11:20.627 } 00:11:20.627 EOF 00:11:20.627 )") 00:11:20.627 [2024-12-06 17:27:12.384301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.384333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:20.627 [2024-12-06 17:27:12.392286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.392295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:20.627 17:27:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:20.627 "params": { 00:11:20.627 "name": "Nvme1", 00:11:20.627 "trtype": "tcp", 00:11:20.627 "traddr": "10.0.0.2", 00:11:20.627 "adrfam": "ipv4", 00:11:20.627 "trsvcid": "4420", 00:11:20.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:20.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:20.627 "hdgst": false, 00:11:20.627 "ddgst": false 00:11:20.627 }, 00:11:20.627 "method": "bdev_nvme_attach_controller" 00:11:20.627 }' 00:11:20.627 [2024-12-06 17:27:12.400304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.400312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.408325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.408333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.420355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.420363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.427954] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:11:20.627 [2024-12-06 17:27:12.428001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560075 ] 00:11:20.627 [2024-12-06 17:27:12.432385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.432392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.444416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.444423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.456448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.456455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.464469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.464476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.472490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.472498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.480511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.480518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.488532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.488540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.500564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.500572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.509228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.627 [2024-12-06 17:27:12.512594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.512602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.524626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.524635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.627 [2024-12-06 17:27:12.536659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.627 [2024-12-06 17:27:12.536669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.538431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.628 [2024-12-06 17:27:12.548693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.548701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.560725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:11:20.628 [2024-12-06 17:27:12.560736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.572753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.572766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.584781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.584791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.596813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.596820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.608858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.608875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.620879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.620889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.632908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.632917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.644938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.644946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.656967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.656975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.668998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.669006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.628 [2024-12-06 17:27:12.681031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.628 [2024-12-06 17:27:12.681040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.889 [2024-12-06 17:27:12.693063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.889 [2024-12-06 17:27:12.693073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.889 [2024-12-06 17:27:12.705101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.889 [2024-12-06 17:27:12.705115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.889 [2024-12-06 17:27:12.717132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.889 [2024-12-06 17:27:12.717145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.889 Running I/O for 5 seconds... 
00:11:20.890 [2024-12-06 17:27:12.732852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.732869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.745536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.745552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.758422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.758439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.772132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.772148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.784889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.784905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.798772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.798790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.812151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.812166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.825729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.825744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.839301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.839317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.852736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.852751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.866089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.866105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.879617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.879633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.892507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.892522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.905076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.905092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.918371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 
[2024-12-06 17:27:12.918387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.931069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.931084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.890 [2024-12-06 17:27:12.944786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.890 [2024-12-06 17:27:12.944801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:12.958315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:12.958330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:12.971684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:12.971699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:12.985065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:12.985080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:12.998476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:12.998491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.011742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.011756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.024576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.024590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.037745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.037760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.050296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.050310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.063646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.063660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.076270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.076284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.089164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.089178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.102411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.102426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.115769] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.115784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.128896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.128910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.142220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.142234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.155900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.155914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.169085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.169099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.182301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.182315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.196080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.196098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.152 [2024-12-06 17:27:13.209018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.152 [2024-12-06 17:27:13.209033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.222110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.222125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.235468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.235482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.248982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.248997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.262522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.262536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.275183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.275197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.287557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.287571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.300431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.300446] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.313292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.313307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.326647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.326661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.340356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.340370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.353810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.353825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.367142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.367157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.379964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.379979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.393060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.393075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.406484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.406499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.419820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.419834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.433086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.433100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.446577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.446599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.460000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.460015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.415 [2024-12-06 17:27:13.473401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.415 [2024-12-06 17:27:13.473416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.486741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.486756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.499921] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.499936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.512647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.512662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.525153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.525168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.538048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.538063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.551217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.551232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.564810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.564825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.578730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.578745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.591857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.591872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.604259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.604274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.617060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.617075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.630542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.630556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.643233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.643247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.656412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.656427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.669009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.669023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.682164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.682179] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.695060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.695078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.707824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.707838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 [2024-12-06 17:27:13.721259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.721274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.677 19139.00 IOPS, 149.52 MiB/s [2024-12-06T16:27:13.743Z] [2024-12-06 17:27:13.734746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.677 [2024-12-06 17:27:13.734761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.747602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.747617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.760834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.760849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.773982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.773997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.786328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.786342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.799654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.799669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.812970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.812985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.826381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.826395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.839645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.839659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.851871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.851885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.864442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.864457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 
17:27:13.877100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.877115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.890595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.890609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.939 [2024-12-06 17:27:13.903708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.939 [2024-12-06 17:27:13.903723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.940 [2024-12-06 17:27:13.917069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.940 [2024-12-06 17:27:13.917084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.940 [2024-12-06 17:27:13.930468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.940 [2024-12-06 17:27:13.930483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.940 [2024-12-06 17:27:13.943862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.940 [2024-12-06 17:27:13.943877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.940 [2024-12-06 17:27:13.957551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.940 [2024-12-06 17:27:13.957566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.940 [2024-12-06 17:27:13.970374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.940 [2024-12-06 17:27:13.970389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.940 [2024-12-06 17:27:13.982953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.940 [2024-12-06 17:27:13.982968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.940 [2024-12-06 17:27:13.996646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.940 [2024-12-06 17:27:13.996660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.202 [2024-12-06 17:27:14.010015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.202 [2024-12-06 17:27:14.010030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.202 [2024-12-06 17:27:14.023474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.202 [2024-12-06 17:27:14.023489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.202 [2024-12-06 17:27:14.036600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.036616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.049224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.049238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.062812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.062827] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.076126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.076141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.089458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.089473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.102281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.102296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.114402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.114417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.127199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.127214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.140365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.140380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.153402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.153417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.167034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.167049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.180223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.180238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.193857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.193872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.206871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.206886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.218884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.218899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.232028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.232043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.245551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.245565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.203 [2024-12-06 17:27:14.258702] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.203 [2024-12-06 17:27:14.258717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.464 [2024-12-06 17:27:14.272224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.464 [2024-12-06 17:27:14.272239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.464 [2024-12-06 17:27:14.284939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.464 [2024-12-06 17:27:14.284954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.464 [2024-12-06 17:27:14.298045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.464 [2024-12-06 17:27:14.298060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.464 [2024-12-06 17:27:14.311414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.464 [2024-12-06 17:27:14.311429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.464 [2024-12-06 17:27:14.324403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.464 [2024-12-06 17:27:14.324418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.464 [2024-12-06 17:27:14.338001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.464 [2024-12-06 17:27:14.338015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.464 [2024-12-06 17:27:14.350981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.464 [2024-12-06 17:27:14.350996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.464 [2024-12-06 17:27:14.363622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.464 [2024-12-06 17:27:14.363641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.464 [2024-12-06 17:27:14.376228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.376243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.388815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.388830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.401391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.401405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.414688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.414703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.428046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.428061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.440619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.440633] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.453566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.453582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.466939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.466957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.479571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.479586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.492797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.492812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.506304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.506319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.465 [2024-12-06 17:27:14.519348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.465 [2024-12-06 17:27:14.519363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.532602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.532617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.545844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.545859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.559318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.559333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.572879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.572894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.585308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.585323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.598619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.598634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.612124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.612139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.625382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.625398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.637970] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.637985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.651615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.651630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.663842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.663857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.677054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.677069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.689971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.689986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.703084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.703098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.715936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.715950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 19268.00 IOPS, 150.53 MiB/s [2024-12-06T16:27:14.792Z] [2024-12-06 17:27:14.728891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.728906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.741822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.741837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.754716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.754730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.767310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.767325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.726 [2024-12-06 17:27:14.780290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.726 [2024-12-06 17:27:14.780305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.987 [2024-12-06 17:27:14.793101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.988 [2024-12-06 17:27:14.793115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.988 [2024-12-06 17:27:14.806629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.988 [2024-12-06 17:27:14.806648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.988 [2024-12-06 17:27:14.819167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:22.988 [2024-12-06 17:27:14.819183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... duplicate entries trimmed: the subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" / nvmf_rpc.c:1520:nvmf_rpc_ns_paused "Unable to add namespace" pair repeats at roughly 13 ms intervals from 17:27:14.832 through 17:27:15.571 ...]
00:11:23.772 [2024-12-06 17:27:15.584862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:23.772 [2024-12-06 17:27:15.584877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
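The flood above is the expected-failure half of the zcopy test: the harness keeps calling nvmf_subsystem_add_ns with an NSID that is already attached, and the target rejects every attempt. As a minimal sketch of how the same rejection can be triggered by hand against a running SPDK target (the subsystem NQN is the one used later in this trace; the Malloc bdev names are illustrative, not taken from this run):

  # The first attach claims NSID 1; the second attach with an explicit,
  # duplicate NSID fails with "Requested NSID 1 already in use" as logged above.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1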
[... duplicate entries trimmed: the same error pair continues from 17:27:15.598 through 17:27:15.715 ...]
00:11:23.772 19312.67 IOPS, 150.88 MiB/s [2024-12-06T16:27:15.838Z]
[... duplicate entries trimmed: the error pair keeps repeating at roughly 13 ms intervals from 17:27:15.728 through 17:27:16.715 ...]
00:11:24.817 [2024-12-06 17:27:16.728686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:24.817 [2024-12-06 17:27:16.728699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:24.817 19318.25 IOPS, 150.92 MiB/s [2024-12-06T16:27:16.883Z]
[... duplicate entries trimmed: the error pair keeps repeating at roughly 13 ms intervals from 17:27:16.742 through 17:27:17.668 ...]
00:11:25.861 [2024-12-06 17:27:17.681957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:25.861 [2024-12-06 17:27:17.681972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... duplicate entries trimmed: the error pair repeats from 17:27:17.695 through 17:27:17.720 ...]
00:11:25.861 19310.60 IOPS, 150.86 MiB/s [2024-12-06T16:27:17.927Z]
00:11:25.861 [2024-12-06 17:27:17.733155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:25.861 [2024-12-06 17:27:17.733169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:25.861
00:11:25.861 Latency(us)
00:11:25.861 [2024-12-06T16:27:17.927Z] Device Information : runtime(s)      IOPS   MiB/s  Fail/s  TO/s  Average      min       max
00:11:25.861 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:25.861 Nvme1n1            :       5.01  19317.27  150.92    0.00  0.00  6620.55  2566.83  18022.40
00:11:25.861 [2024-12-06T16:27:17.927Z] ===================================================================================================================
00:11:25.861 [2024-12-06T16:27:17.927Z] Total              :              19317.27  150.92    0.00  0.00  6620.55  2566.83  18022.40
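The table's MiB/s column follows directly from the IOPS and the 8192-byte I/O size shown in the Job line; a quick sanity check:

  # 19317.27 IOPS x 8192 B per I/O, converted to MiB/s
  echo '19317.27 * 8192 / 1048576' | bc -l   # ~150.92, matching the table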
[... duplicate entries trimmed: the error pair continues from 17:27:17.742 through 17:27:17.839 while the harness winds the loop down ...]
00:11:25.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1560075) - No such process
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1560075
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:25.862 delay0
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
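For the abort pass, zcopy.sh re-adds the namespace on top of a delay bdev so that queued I/Os sit in the target long enough for aborts to catch them. rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client; the equivalent direct calls would be roughly the following (a sketch; the delay latency arguments are in microseconds, so 1000000 = 1 s):

  # 1 s average and p99 latency for both reads and writes, layered on malloc0
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1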
00:11:25.862 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:11:26.212 [2024-12-06 17:27:18.055815] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:11:34.374 Initializing NVMe Controllers
00:11:34.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:34.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:34.374 Initialization complete. Launching workers.
00:11:34.374 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 33378
00:11:34.374 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33498, failed to submit 120
00:11:34.374 success 33405, unsuccessful 93, failed 0
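The abort counters line up, suggesting that each I/O that did not complete normally got exactly one abort attempt, and that every submitted abort was accounted for as success or unsuccessful:

  echo $((240 + 33378))   # 33618 I/Os reached a terminal state (completed + failed)
  echo $((33498 + 120))   # 33618 abort attempts (submitted + failed to submit)
  echo $((33405 + 93))    # 33498, exactly the number of aborts submitted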
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1557417 ']'
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1557417
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1557417 ']'
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1557417
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1557417
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1557417'
killing process with pid 1557417
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1557417
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1557417
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
17:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:35.756 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:35.756
00:11:35.756 real 0m34.529s
00:11:35.756 user 0m45.391s
00:11:35.756 sys 0m12.008s
00:11:35.756 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:35.756 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:35.756 ************************************
00:11:35.756 END TEST nvmf_zcopy
00:11:35.756 ************************************
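The nvmftestfini trace above unwinds everything the test set up: kernel initiator modules, SPDK-tagged firewall rules, the target process (pid 1557417, running as reactor_1), and the test network state. Condensed to its effective commands, the cleanup amounts to something like the following sketch (the real helpers live in test/nvmf/common.sh):

  modprobe -v -r nvme-tcp                               # unload the initiator; nvme_fabrics and nvme_keyring go with it
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the SPDK test firewall rules
  kill 1557417 && wait 1557417                          # stop the nvmf target process
  ip -4 addr flush cvl_0_1                              # clear the test NIC's addresses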
00:11:35.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.756 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.756 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.756 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:36.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.018 --rc genhtml_branch_coverage=1 00:11:36.018 --rc genhtml_function_coverage=1 00:11:36.018 --rc genhtml_legend=1 00:11:36.018 --rc geninfo_all_blocks=1 00:11:36.018 --rc geninfo_unexecuted_blocks=1 00:11:36.018 00:11:36.018 ' 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:36.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.018 --rc genhtml_branch_coverage=1 00:11:36.018 --rc genhtml_function_coverage=1 00:11:36.018 --rc genhtml_legend=1 00:11:36.018 --rc geninfo_all_blocks=1 00:11:36.018 --rc geninfo_unexecuted_blocks=1 00:11:36.018 00:11:36.018 ' 00:11:36.018 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:36.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.018 --rc genhtml_branch_coverage=1 00:11:36.018 --rc genhtml_function_coverage=1 00:11:36.018 --rc genhtml_legend=1 00:11:36.018 --rc geninfo_all_blocks=1 00:11:36.018 --rc geninfo_unexecuted_blocks=1 00:11:36.019 00:11:36.019 ' 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:36.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.019 --rc genhtml_branch_coverage=1 00:11:36.019 --rc genhtml_function_coverage=1 00:11:36.019 --rc genhtml_legend=1 00:11:36.019 --rc geninfo_all_blocks=1 00:11:36.019 --rc geninfo_unexecuted_blocks=1 00:11:36.019 00:11:36.019 ' 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
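
The cmp_versions trace just above is scripts/common.sh deciding which coverage flags to export: `lt 1.15 2` splits both version strings on `.-:`, compares them field by field (missing fields behave as 0), and since the installed lcov 1.15 sorts below 2 the legacy `--rc lcov_*` LCOV_OPTS seen in the trace are used. A minimal re-sketch of that comparison, for orientation only — this is an illustration of the traced behavior, not the verbatim SPDK helper:

    # Illustrative re-sketch of the lt/cmp_versions helpers traced above
    # (field-by-field numeric compare; not the verbatim scripts/common.sh code).
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        local op=$2
        IFS='.-:' read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # every field matched
    }

    lt 1.15 2 && echo "lcov < 2: export the legacy --rc lcov_* coverage options"
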
00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:36.019 
17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.019 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:44.159 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.159 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:44.160 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.160 17:27:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:44.160 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:44.160 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:11:44.160 00:11:44.160 --- 10.0.0.2 ping statistics --- 00:11:44.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.160 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:44.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:11:44.160 00:11:44.160 --- 10.0.0.1 ping statistics --- 00:11:44.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.160 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1566984 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1566984 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1566984 ']' 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.160 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.160 [2024-12-06 17:27:35.449500] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
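
The fixture that nvmf_tgt is launching into here was assembled a few records earlier: nvmf_tcp_init moves one E810 port (cvl_0_0) into a fresh network namespace for the target side, leaves its sibling (cvl_0_1) in the root namespace for the initiator, punches an iptables hole for port 4420 tagged with an SPDK_NVMF comment so it can be filtered back out later, and ping-checks both directions. Condensed from the traced commands (simplified sketch, run as root):

    # Condensed from the nvmf_tcp_init trace above (simplified, run as root)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

The target app is then started inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF`), so TCP traffic between initiator and target really crosses the two physical E810 ports.
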
00:11:44.160 [2024-12-06 17:27:35.449569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.160 [2024-12-06 17:27:35.550083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.160 [2024-12-06 17:27:35.606786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.160 [2024-12-06 17:27:35.606847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.160 [2024-12-06 17:27:35.606855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.160 [2024-12-06 17:27:35.606862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.160 [2024-12-06 17:27:35.606868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.160 [2024-12-06 17:27:35.608924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.160 [2024-12-06 17:27:35.609083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.160 [2024-12-06 17:27:35.609222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.160 [2024-12-06 17:27:35.609223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 [2024-12-06 17:27:36.319777] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 Malloc0 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 [2024-12-06 17:27:36.386050] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:44.421 test case1: single bdev can't be used in multiple subsystems 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.421 [2024-12-06 17:27:36.409827] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:44.421 [2024-12-06 17:27:36.409853] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:44.421 [2024-12-06 17:27:36.409861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.421 request: 00:11:44.421 { 00:11:44.421 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:44.421 "namespace": { 00:11:44.421 "bdev_name": "Malloc0", 00:11:44.421 "no_auto_visible": false, 
00:11:44.421 "hide_metadata": false 00:11:44.421 }, 00:11:44.421 "method": "nvmf_subsystem_add_ns", 00:11:44.421 "req_id": 1 00:11:44.421 } 00:11:44.421 Got JSON-RPC error response 00:11:44.421 response: 00:11:44.421 { 00:11:44.421 "code": -32602, 00:11:44.421 "message": "Invalid parameters" 00:11:44.421 } 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:44.421 Adding namespace failed - expected result. 00:11:44.421 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:44.421 test case2: host connect to nvmf target in multiple paths 00:11:44.422 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:44.422 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.422 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.422 [2024-12-06 17:27:36.422047] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:44.422 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.422 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.335 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:47.718 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.718 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.718 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.718 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.718 17:27:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.631 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.631 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.631 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.631 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:49.631 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.631 17:27:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:49.631 17:27:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:49.631 [global] 00:11:49.631 thread=1 00:11:49.631 invalidate=1 00:11:49.631 rw=write 00:11:49.631 time_based=1 00:11:49.631 runtime=1 00:11:49.631 ioengine=libaio 00:11:49.631 direct=1 00:11:49.631 bs=4096 00:11:49.631 iodepth=1 00:11:49.631 norandommap=0 00:11:49.631 numjobs=1 00:11:49.631 00:11:49.631 verify_dump=1 00:11:49.631 verify_backlog=512 00:11:49.631 verify_state_save=0 00:11:49.631 do_verify=1 00:11:49.631 verify=crc32c-intel 00:11:49.631 [job0] 00:11:49.631 filename=/dev/nvme0n1 00:11:49.631 Could not set queue depth (nvme0n1) 00:11:50.198 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.198 fio-3.35 00:11:50.198 Starting 1 thread 00:11:51.139 00:11:51.139 job0: (groupid=0, jobs=1): err= 0: pid=1568339: Fri Dec 6 17:27:43 2024 00:11:51.139 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:51.139 slat (nsec): min=7452, max=61160, avg=26197.18, stdev=3379.64 00:11:51.139 clat (usec): min=585, max=1181, avg=978.46, stdev=70.51 00:11:51.139 lat (usec): min=611, max=1207, avg=1004.66, stdev=70.82 00:11:51.139 clat percentiles (usec): 00:11:51.139 | 1.00th=[ 766], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 930], 00:11:51.139 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1004], 00:11:51.139 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:11:51.139 | 99.00th=[ 1106], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:11:51.139 | 99.99th=[ 1188] 00:11:51.139 write: IOPS=731, BW=2925KiB/s (2995kB/s)(2928KiB/1001msec); 0 zone resets 00:11:51.139 slat (usec): min=10, max=26756, avg=66.13, stdev=987.92 00:11:51.139 clat (usec): min=230, max=817, avg=582.78, stdev=104.10 00:11:51.139 lat (usec): min=242, max=27413, avg=648.90, stdev=996.55 00:11:51.139 clat percentiles (usec): 00:11:51.139 | 1.00th=[ 322], 5.00th=[ 396], 10.00th=[ 441], 20.00th=[ 494], 00:11:51.139 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:11:51.139 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 725], 00:11:51.139 | 99.00th=[ 783], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:11:51.139 | 99.99th=[ 816] 00:11:51.139 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:51.139 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:51.139 lat (usec) : 250=0.08%, 500=12.22%, 750=45.34%, 1000=25.48% 00:11:51.139 lat (msec) : 2=16.88% 00:11:51.139 cpu : usr=1.80%, sys=3.70%, ctx=1247, majf=0, minf=1 00:11:51.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.139 issued rwts: total=512,732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.139 00:11:51.139 Run status group 0 (all jobs): 00:11:51.139 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:11:51.139 WRITE: bw=2925KiB/s (2995kB/s), 2925KiB/s-2925KiB/s (2995kB/s-2995kB/s), io=2928KiB (2998kB), run=1001-1001msec 00:11:51.139 00:11:51.139 Disk stats (read/write): 00:11:51.139 nvme0n1: ios=537/571, merge=0/0, ticks=1463/319, in_queue=1782, util=98.80% 
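
The job file printed before the run is what scripts/fio-wrapper generated: a one-second, queue-depth-1, 4 KiB sequential-write pass over the multipath namespace, followed by crc32c-intel verification reads. The numbers above are self-consistent: 732 writes × 4 KiB over ~1.001 s ≈ 2925 KiB/s on the WRITE line, and the 512-block verify pass gives 2048 KiB over the same window ≈ 2046 KiB/s on the READ line. Assuming the namespace is still connected as /dev/nvme0n1, the same job could be replayed without the wrapper roughly like this (hypothetical direct invocation; the /tmp path is ours, the parameters are copied from the job file above):

    # Hypothetical stand-alone reproduction of the wrapper-generated job above
    cat > /tmp/job0.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/job0.fio
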
00:11:51.139 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:51.399 rmmod nvme_tcp 00:11:51.399 rmmod nvme_fabrics 00:11:51.399 rmmod nvme_keyring 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1566984 ']' 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1566984 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1566984 ']' 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1566984 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566984 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.399 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.400 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566984' 00:11:51.400 killing process with pid 1566984 00:11:51.400 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1566984 
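
The teardown traced here mirrors the setup in reverse: `modprobe -v -r nvme-tcp` cascades through the dependent modules (hence the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the in-namespace nvmf_tgt is killed by PID, the SPDK_NVMF-tagged iptables rule is filtered back out, and the namespace and leftover addresses are removed. Condensed, the cleanup amounts to the following (simplified sketch; the `ip netns delete` line is our reading of what `_remove_spdk_ns` does, since its output is xtrace-suppressed above):

    # Simplified replay of the traced nvmftestfini cleanup (run as root)
    nvmfpid=1566984                 # PID captured by nvmfappstart above
    sync
    modprobe -v -r nvme-tcp         # also drags out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                 # wait succeeds in the test shell, which owns the child
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's tagged rule
    ip netns delete cvl_0_0_ns_spdk # assumed equivalent of _remove_spdk_ns here
    ip -4 addr flush cvl_0_1
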
00:11:51.400 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1566984 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.660 17:27:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.573 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:53.573 00:11:53.573 real 0m17.958s 00:11:53.573 user 0m48.928s 00:11:53.573 sys 0m6.571s 00:11:53.573 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.573 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:53.573 ************************************ 00:11:53.573 END TEST nvmf_nmic 00:11:53.573 ************************************ 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:53.834 ************************************ 00:11:53.834 START TEST nvmf_fio_target 00:11:53.834 ************************************ 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:53.834 * Looking for test storage... 
00:11:53.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.834 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:54.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.096 --rc genhtml_branch_coverage=1 00:11:54.096 --rc genhtml_function_coverage=1 00:11:54.096 --rc genhtml_legend=1 00:11:54.096 --rc geninfo_all_blocks=1 00:11:54.096 --rc geninfo_unexecuted_blocks=1 00:11:54.096 00:11:54.096 ' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:54.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.096 --rc genhtml_branch_coverage=1 00:11:54.096 --rc genhtml_function_coverage=1 00:11:54.096 --rc genhtml_legend=1 00:11:54.096 --rc geninfo_all_blocks=1 00:11:54.096 --rc geninfo_unexecuted_blocks=1 00:11:54.096 00:11:54.096 ' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:54.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.096 --rc genhtml_branch_coverage=1 00:11:54.096 --rc genhtml_function_coverage=1 00:11:54.096 --rc genhtml_legend=1 00:11:54.096 --rc geninfo_all_blocks=1 00:11:54.096 --rc geninfo_unexecuted_blocks=1 00:11:54.096 00:11:54.096 ' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:54.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.096 --rc genhtml_branch_coverage=1 00:11:54.096 --rc genhtml_function_coverage=1 00:11:54.096 --rc genhtml_legend=1 00:11:54.096 --rc geninfo_all_blocks=1 00:11:54.096 --rc geninfo_unexecuted_blocks=1 00:11:54.096 00:11:54.096 ' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.096 17:27:45 
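The `[: : integer expression expected` message captured above comes from common.sh line 33 applying `-eq` to an empty operand (`'[' '' -eq 1 ']'`), which bash rejects for arithmetic tests. A minimal sketch of the failing pattern and the usual guard, assuming the tested value is an optional integer flag (the variable name below is hypothetical, not the script's):

    # bash rejects an empty operand in an arithmetic test:
    [ '' -eq 1 ]                     # -> [: : integer expression expected
    # defaulting the expansion makes unset/empty count as 0:
    [ "${SOME_FLAG:-0}" -eq 1 ]      # safe when the flag was never set
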
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:54.096 17:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.262 17:27:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:02.262 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:02.262 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.262 17:27:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:02.262 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.262 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:02.263 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:02.263 17:27:53 
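The trace above walks SPDK's NIC discovery: PCI IDs for supported Intel (e810/x722) and Mellanox parts are collected, matching devices are resolved to kernel net devices through sysfs, and the two E810 ports land in net_devs as cvl_0_0 and cvl_0_1. A rough standalone equivalent, assuming pciutils is available (the script itself walks a prebuilt pci_bus_cache map rather than shelling out to lspci):

    intel=8086
    for pci in $(lspci -Dn -d "${intel}:159b" | awk '{print $1}'); do
      # each matching PCI function exposes its net device(s) under sysfs
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net devices under $pci: ${dev##*/}"
      done
    done
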
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:02.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:12:02.263 00:12:02.263 --- 10.0.0.2 ping statistics --- 00:12:02.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.263 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:02.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:12:02.263 00:12:02.263 --- 10.0.0.1 ping statistics --- 00:12:02.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.263 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1572970 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1572970 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1572970 ']' 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.263 17:27:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.263 [2024-12-06 17:27:53.444118] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
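Condensed from the trace, the test network is a physical loopback pair: one E810 port (cvl_0_0) is moved into a private namespace to act as the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the target application is then launched under `ip netns exec`. The sequence as it appears above, with the nvmf_tgt path shortened:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target lives in the namespace
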
00:12:02.263 [2024-12-06 17:27:53.444185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.263 [2024-12-06 17:27:53.529133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.263 [2024-12-06 17:27:53.583736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.263 [2024-12-06 17:27:53.583799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.263 [2024-12-06 17:27:53.583808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.263 [2024-12-06 17:27:53.583815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.263 [2024-12-06 17:27:53.583821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.263 [2024-12-06 17:27:53.586221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.263 [2024-12-06 17:27:53.586380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.263 [2024-12-06 17:27:53.586543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.263 [2024-12-06 17:27:53.586544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.263 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.263 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:02.263 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:02.263 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:02.263 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.263 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.263 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:02.523 [2024-12-06 17:27:54.477410] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.523 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:02.784 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:02.784 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.045 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:03.045 17:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.304 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:03.304 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.565 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:03.565 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:03.565 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.824 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:03.824 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.084 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:04.084 17:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.403 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:04.404 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:04.404 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.663 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:04.663 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:04.922 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:04.922 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.922 17:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.182 [2024-12-06 17:27:57.068773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.182 17:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:05.441 17:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:05.441 17:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.352 17:27:59 
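The subsystem construction traced above reduces to a short rpc.py sequence: a TCP transport, seven 64 MiB/512 B malloc bdevs, a RAID-0 and a concat array built over four of them, and one subsystem exposing Malloc0, Malloc1, raid0 and concat0 on 10.0.0.2:4420. Condensed from the trace (`rpc.py` stands for the full scripts/rpc.py path; NVME_HOSTNQN/NVME_HOSTID are the values generated earlier in this log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 0 6); do rpc.py bdev_malloc_create 64 512; done    # Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

The four namespaces are what surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4 in the fio runs that follow.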
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:07.352 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:07.352 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.352 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:07.352 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:07.352 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.267 17:28:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.267 17:28:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.267 17:28:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.267 17:28:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:09.267 17:28:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.267 17:28:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:09.267 17:28:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:09.267 [global] 00:12:09.267 thread=1 00:12:09.267 invalidate=1 00:12:09.267 rw=write 00:12:09.267 time_based=1 00:12:09.267 runtime=1 00:12:09.267 ioengine=libaio 00:12:09.267 direct=1 00:12:09.267 bs=4096 00:12:09.267 iodepth=1 00:12:09.267 norandommap=0 00:12:09.267 numjobs=1 00:12:09.267 00:12:09.267 verify_dump=1 00:12:09.267 verify_backlog=512 00:12:09.267 verify_state_save=0 00:12:09.267 do_verify=1 00:12:09.267 verify=crc32c-intel 00:12:09.267 [job0] 00:12:09.267 filename=/dev/nvme0n1 00:12:09.267 [job1] 00:12:09.267 filename=/dev/nvme0n2 00:12:09.267 [job2] 00:12:09.267 filename=/dev/nvme0n3 00:12:09.267 [job3] 00:12:09.267 filename=/dev/nvme0n4 00:12:09.267 Could not set queue depth (nvme0n1) 00:12:09.267 Could not set queue depth (nvme0n2) 00:12:09.267 Could not set queue depth (nvme0n3) 00:12:09.267 Could not set queue depth (nvme0n4) 00:12:09.528 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.528 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.528 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.528 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.528 fio-3.35 00:12:09.528 Starting 4 threads 00:12:10.914 00:12:10.914 job0: (groupid=0, jobs=1): err= 0: pid=1574714: Fri Dec 6 17:28:02 2024 00:12:10.914 read: IOPS=18, BW=74.7KiB/s (76.4kB/s)(76.0KiB/1018msec) 00:12:10.914 slat (nsec): min=25675, max=26648, avg=25963.84, stdev=267.97 00:12:10.914 clat (usec): min=736, max=41040, avg=38844.77, stdev=9228.46 00:12:10.914 lat (usec): min=763, max=41066, avg=38870.73, stdev=9228.29 00:12:10.914 clat percentiles (usec): 00:12:10.914 | 1.00th=[ 734], 5.00th=[ 734], 10.00th=[40633], 
20.00th=[41157], 00:12:10.914 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:10.914 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:10.914 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:10.914 | 99.99th=[41157] 00:12:10.914 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:12:10.914 slat (nsec): min=9034, max=70009, avg=31896.67, stdev=8289.48 00:12:10.914 clat (usec): min=124, max=1511, avg=505.23, stdev=135.42 00:12:10.914 lat (usec): min=137, max=1547, avg=537.13, stdev=138.16 00:12:10.914 clat percentiles (usec): 00:12:10.914 | 1.00th=[ 229], 5.00th=[ 285], 10.00th=[ 343], 20.00th=[ 383], 00:12:10.914 | 30.00th=[ 420], 40.00th=[ 474], 50.00th=[ 515], 60.00th=[ 545], 00:12:10.914 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 668], 95.00th=[ 701], 00:12:10.914 | 99.00th=[ 758], 99.50th=[ 791], 99.90th=[ 1516], 99.95th=[ 1516], 00:12:10.914 | 99.99th=[ 1516] 00:12:10.914 bw ( KiB/s): min= 4096, max= 4096, per=42.81%, avg=4096.00, stdev= 0.00, samples=1 00:12:10.914 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:10.914 lat (usec) : 250=1.88%, 500=42.37%, 750=50.85%, 1000=1.32% 00:12:10.914 lat (msec) : 2=0.19%, 50=3.39% 00:12:10.914 cpu : usr=1.28%, sys=1.77%, ctx=532, majf=0, minf=1 00:12:10.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.914 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.914 job1: (groupid=0, jobs=1): err= 0: pid=1574736: Fri Dec 6 17:28:02 2024 00:12:10.914 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:10.914 slat (nsec): min=27203, max=46086, avg=27939.74, stdev=2012.92 00:12:10.914 clat (usec): min=399, max=41939, avg=1133.27, stdev=2555.83 00:12:10.914 lat (usec): min=427, max=41967, avg=1161.21, stdev=2555.79 00:12:10.914 clat percentiles (usec): 00:12:10.914 | 1.00th=[ 758], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 922], 00:12:10.914 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:12:10.914 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:12:10.914 | 99.00th=[ 1156], 99.50th=[ 1385], 99.90th=[41681], 99.95th=[41681], 00:12:10.914 | 99.99th=[41681] 00:12:10.914 write: IOPS=634, BW=2537KiB/s (2598kB/s)(2540KiB/1001msec); 0 zone resets 00:12:10.914 slat (nsec): min=9514, max=56162, avg=31064.70, stdev=10630.98 00:12:10.914 clat (usec): min=232, max=1485, avg=586.92, stdev=130.22 00:12:10.914 lat (usec): min=242, max=1497, avg=617.98, stdev=134.87 00:12:10.914 clat percentiles (usec): 00:12:10.914 | 1.00th=[ 297], 5.00th=[ 371], 10.00th=[ 424], 20.00th=[ 474], 00:12:10.914 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 627], 00:12:10.914 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 783], 00:12:10.914 | 99.00th=[ 840], 99.50th=[ 898], 99.90th=[ 1483], 99.95th=[ 1483], 00:12:10.914 | 99.99th=[ 1483] 00:12:10.914 bw ( KiB/s): min= 4096, max= 4096, per=42.81%, avg=4096.00, stdev= 0.00, samples=1 00:12:10.914 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:10.914 lat (usec) : 250=0.09%, 500=14.56%, 750=36.01%, 1000=30.95% 00:12:10.914 lat (msec) : 2=18.22%, 50=0.17% 00:12:10.914 cpu : usr=2.80%, sys=4.20%, ctx=1149, majf=0, minf=1 
00:12:10.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.914 issued rwts: total=512,635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.914 job2: (groupid=0, jobs=1): err= 0: pid=1574746: Fri Dec 6 17:28:02 2024 00:12:10.914 read: IOPS=17, BW=71.9KiB/s (73.7kB/s)(72.0KiB/1001msec) 00:12:10.914 slat (nsec): min=25831, max=30073, avg=27138.83, stdev=870.20 00:12:10.914 clat (usec): min=938, max=42095, avg=36932.78, stdev=13073.01 00:12:10.914 lat (usec): min=964, max=42122, avg=36959.92, stdev=13072.72 00:12:10.914 clat percentiles (usec): 00:12:10.914 | 1.00th=[ 938], 5.00th=[ 938], 10.00th=[ 1090], 20.00th=[41157], 00:12:10.914 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:10.914 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:12:10.914 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:10.914 | 99.99th=[42206] 00:12:10.914 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:10.914 slat (nsec): min=10244, max=59964, avg=30898.65, stdev=10241.79 00:12:10.914 clat (usec): min=275, max=937, avg=610.13, stdev=114.66 00:12:10.914 lat (usec): min=285, max=992, avg=641.03, stdev=118.13 00:12:10.914 clat percentiles (usec): 00:12:10.914 | 1.00th=[ 326], 5.00th=[ 383], 10.00th=[ 449], 20.00th=[ 519], 00:12:10.914 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:12:10.915 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 775], 00:12:10.915 | 99.00th=[ 873], 99.50th=[ 881], 99.90th=[ 938], 99.95th=[ 938], 00:12:10.915 | 99.99th=[ 938] 00:12:10.915 bw ( KiB/s): min= 4096, max= 4096, per=42.81%, avg=4096.00, stdev= 0.00, samples=1 00:12:10.915 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:10.915 lat (usec) : 500=17.36%, 750=70.75%, 1000=8.68% 00:12:10.915 lat (msec) : 2=0.19%, 50=3.02% 00:12:10.915 cpu : usr=0.90%, sys=1.50%, ctx=532, majf=0, minf=1 00:12:10.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.915 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.915 job3: (groupid=0, jobs=1): err= 0: pid=1574750: Fri Dec 6 17:28:02 2024 00:12:10.915 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:10.915 slat (nsec): min=26372, max=53923, avg=27105.86, stdev=2180.38 00:12:10.915 clat (usec): min=661, max=1444, avg=975.81, stdev=71.38 00:12:10.915 lat (usec): min=687, max=1471, avg=1002.91, stdev=71.31 00:12:10.915 clat percentiles (usec): 00:12:10.915 | 1.00th=[ 775], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 930], 00:12:10.915 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:12:10.915 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:12:10.915 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1450], 99.95th=[ 1450], 00:12:10.915 | 99.99th=[ 1450] 00:12:10.915 write: IOPS=775, BW=3101KiB/s (3175kB/s)(3104KiB/1001msec); 0 zone resets 00:12:10.915 slat (nsec): min=9314, max=54593, avg=29838.51, stdev=9639.13 00:12:10.915 clat 
(usec): min=223, max=946, avg=585.01, stdev=113.04 00:12:10.915 lat (usec): min=234, max=981, avg=614.85, stdev=117.37 00:12:10.915 clat percentiles (usec): 00:12:10.915 | 1.00th=[ 306], 5.00th=[ 383], 10.00th=[ 437], 20.00th=[ 494], 00:12:10.915 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 619], 00:12:10.915 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758], 00:12:10.915 | 99.00th=[ 816], 99.50th=[ 857], 99.90th=[ 947], 99.95th=[ 947], 00:12:10.915 | 99.99th=[ 947] 00:12:10.915 bw ( KiB/s): min= 4096, max= 4096, per=42.81%, avg=4096.00, stdev= 0.00, samples=1 00:12:10.915 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:10.915 lat (usec) : 250=0.23%, 500=13.04%, 750=43.71%, 1000=28.26% 00:12:10.915 lat (msec) : 2=14.75% 00:12:10.915 cpu : usr=2.20%, sys=5.30%, ctx=1288, majf=0, minf=2 00:12:10.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.915 issued rwts: total=512,776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.915 00:12:10.915 Run status group 0 (all jobs): 00:12:10.915 READ: bw=4169KiB/s (4269kB/s), 71.9KiB/s-2046KiB/s (73.7kB/s-2095kB/s), io=4244KiB (4346kB), run=1001-1018msec 00:12:10.915 WRITE: bw=9568KiB/s (9797kB/s), 2012KiB/s-3101KiB/s (2060kB/s-3175kB/s), io=9740KiB (9974kB), run=1001-1018msec 00:12:10.915 00:12:10.915 Disk stats (read/write): 00:12:10.915 nvme0n1: ios=64/512, merge=0/0, ticks=583/182, in_queue=765, util=86.47% 00:12:10.915 nvme0n2: ios=465/512, merge=0/0, ticks=1074/244, in_queue=1318, util=96.32% 00:12:10.915 nvme0n3: ios=70/512, merge=0/0, ticks=1619/298, in_queue=1917, util=96.19% 00:12:10.915 nvme0n4: ios=505/512, merge=0/0, ticks=480/249, in_queue=729, util=89.39% 00:12:10.915 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:10.915 [global] 00:12:10.915 thread=1 00:12:10.915 invalidate=1 00:12:10.915 rw=randwrite 00:12:10.915 time_based=1 00:12:10.915 runtime=1 00:12:10.915 ioengine=libaio 00:12:10.915 direct=1 00:12:10.915 bs=4096 00:12:10.915 iodepth=1 00:12:10.915 norandommap=0 00:12:10.915 numjobs=1 00:12:10.915 00:12:10.915 verify_dump=1 00:12:10.915 verify_backlog=512 00:12:10.915 verify_state_save=0 00:12:10.915 do_verify=1 00:12:10.915 verify=crc32c-intel 00:12:10.915 [job0] 00:12:10.915 filename=/dev/nvme0n1 00:12:10.915 [job1] 00:12:10.915 filename=/dev/nvme0n2 00:12:10.915 [job2] 00:12:10.915 filename=/dev/nvme0n3 00:12:10.915 [job3] 00:12:10.915 filename=/dev/nvme0n4 00:12:10.915 Could not set queue depth (nvme0n1) 00:12:10.915 Could not set queue depth (nvme0n2) 00:12:10.915 Could not set queue depth (nvme0n3) 00:12:10.915 Could not set queue depth (nvme0n4) 00:12:11.175 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.175 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.175 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.175 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.175 fio-3.35 00:12:11.175 
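For reference, the fio-wrapper flags map directly onto the job file printed above; the mapping below is inferred by comparing the invocation with the emitted [global] section of this trace, not taken from the wrapper's documentation:

    # -p nvmf       profile/prefix used by the wrapper
    # -i 4096       bs=4096
    # -d 1          iodepth=1 (the later runs in this log use -d 128)
    # -t randwrite  rw=randwrite
    # -r 1          runtime=1 with time_based=1
    # -v            do_verify=1 / verify=crc32c-intel
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
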
Starting 4 threads 00:12:12.559 00:12:12.559 job0: (groupid=0, jobs=1): err= 0: pid=1575227: Fri Dec 6 17:28:04 2024 00:12:12.559 read: IOPS=18, BW=72.9KiB/s (74.7kB/s)(76.0KiB/1042msec) 00:12:12.559 slat (nsec): min=26303, max=27029, avg=26529.95, stdev=175.49 00:12:12.559 clat (usec): min=40869, max=41975, avg=41406.57, stdev=483.29 00:12:12.559 lat (usec): min=40896, max=42002, avg=41433.10, stdev=483.29 00:12:12.559 clat percentiles (usec): 00:12:12.559 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:12.559 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:12:12.559 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:12.559 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:12.559 | 99.99th=[42206] 00:12:12.559 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:12:12.559 slat (nsec): min=8249, max=54217, avg=21966.01, stdev=11545.27 00:12:12.559 clat (usec): min=161, max=962, avg=468.77, stdev=173.99 00:12:12.559 lat (usec): min=172, max=993, avg=490.73, stdev=182.41 00:12:12.559 clat percentiles (usec): 00:12:12.559 | 1.00th=[ 215], 5.00th=[ 253], 10.00th=[ 269], 20.00th=[ 285], 00:12:12.559 | 30.00th=[ 318], 40.00th=[ 388], 50.00th=[ 465], 60.00th=[ 515], 00:12:12.559 | 70.00th=[ 578], 80.00th=[ 635], 90.00th=[ 709], 95.00th=[ 766], 00:12:12.559 | 99.00th=[ 848], 99.50th=[ 906], 99.90th=[ 963], 99.95th=[ 963], 00:12:12.559 | 99.99th=[ 963] 00:12:12.559 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:12:12.559 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:12.559 lat (usec) : 250=4.71%, 500=50.09%, 750=35.97%, 1000=5.65% 00:12:12.559 lat (msec) : 50=3.58% 00:12:12.559 cpu : usr=0.77%, sys=1.06%, ctx=532, majf=0, minf=1 00:12:12.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.559 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.559 job1: (groupid=0, jobs=1): err= 0: pid=1575238: Fri Dec 6 17:28:04 2024 00:12:12.559 read: IOPS=17, BW=71.6KiB/s (73.4kB/s)(72.0KiB/1005msec) 00:12:12.559 slat (nsec): min=26584, max=27411, avg=27036.17, stdev=202.36 00:12:12.559 clat (usec): min=40888, max=41131, avg=40973.27, stdev=55.47 00:12:12.559 lat (usec): min=40914, max=41158, avg=41000.30, stdev=55.37 00:12:12.559 clat percentiles (usec): 00:12:12.559 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:12.559 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:12.559 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:12.559 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:12.559 | 99.99th=[41157] 00:12:12.559 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:12:12.559 slat (usec): min=9, max=27119, avg=80.41, stdev=1197.34 00:12:12.559 clat (usec): min=201, max=607, avg=431.11, stdev=72.29 00:12:12.559 lat (usec): min=228, max=27551, avg=511.52, stdev=1199.85 00:12:12.559 clat percentiles (usec): 00:12:12.559 | 1.00th=[ 249], 5.00th=[ 289], 10.00th=[ 326], 20.00th=[ 363], 00:12:12.559 | 30.00th=[ 408], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 465], 00:12:12.559 | 70.00th=[ 474], 80.00th=[ 490], 
90.00th=[ 506], 95.00th=[ 523], 00:12:12.559 | 99.00th=[ 570], 99.50th=[ 578], 99.90th=[ 611], 99.95th=[ 611], 00:12:12.559 | 99.99th=[ 611] 00:12:12.559 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:12:12.559 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:12.559 lat (usec) : 250=1.13%, 500=82.83%, 750=12.64% 00:12:12.559 lat (msec) : 50=3.40% 00:12:12.559 cpu : usr=0.30%, sys=1.89%, ctx=533, majf=0, minf=1 00:12:12.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.560 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.560 job2: (groupid=0, jobs=1): err= 0: pid=1575256: Fri Dec 6 17:28:04 2024 00:12:12.560 read: IOPS=15, BW=63.7KiB/s (65.3kB/s)(64.0KiB/1004msec) 00:12:12.560 slat (nsec): min=26241, max=26736, avg=26527.94, stdev=125.14 00:12:12.560 clat (usec): min=40980, max=42040, avg=41842.24, stdev=323.78 00:12:12.560 lat (usec): min=41006, max=42066, avg=41868.76, stdev=323.82 00:12:12.560 clat percentiles (usec): 00:12:12.560 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:12:12.560 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:12.560 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:12.560 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:12.560 | 99.99th=[42206] 00:12:12.560 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:12:12.560 slat (nsec): min=9902, max=53041, avg=30046.25, stdev=9469.38 00:12:12.560 clat (usec): min=258, max=942, avg=612.41, stdev=122.98 00:12:12.560 lat (usec): min=292, max=976, avg=642.46, stdev=126.55 00:12:12.560 clat percentiles (usec): 00:12:12.560 | 1.00th=[ 318], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 510], 00:12:12.560 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 660], 00:12:12.560 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 799], 00:12:12.560 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 947], 99.95th=[ 947], 00:12:12.560 | 99.99th=[ 947] 00:12:12.560 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:12:12.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:12.560 lat (usec) : 500=17.99%, 750=67.80%, 1000=11.17% 00:12:12.560 lat (msec) : 50=3.03% 00:12:12.560 cpu : usr=0.50%, sys=1.79%, ctx=530, majf=0, minf=1 00:12:12.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.560 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.560 job3: (groupid=0, jobs=1): err= 0: pid=1575263: Fri Dec 6 17:28:04 2024 00:12:12.560 read: IOPS=17, BW=69.1KiB/s (70.8kB/s)(72.0KiB/1042msec) 00:12:12.560 slat (nsec): min=26367, max=26831, avg=26608.17, stdev=120.58 00:12:12.560 clat (usec): min=1130, max=42077, avg=39610.00, stdev=9606.54 00:12:12.560 lat (usec): min=1157, max=42103, avg=39636.61, stdev=9606.48 00:12:12.560 clat percentiles (usec): 00:12:12.560 | 1.00th=[ 1139], 5.00th=[ 1139], 
10.00th=[41157], 20.00th=[41681], 00:12:12.560 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:12:12.560 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:12.560 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:12.560 | 99.99th=[42206] 00:12:12.560 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:12:12.560 slat (nsec): min=8914, max=60412, avg=28149.14, stdev=9926.78 00:12:12.560 clat (usec): min=237, max=987, avg=605.11, stdev=138.30 00:12:12.560 lat (usec): min=250, max=1020, avg=633.25, stdev=143.48 00:12:12.560 clat percentiles (usec): 00:12:12.560 | 1.00th=[ 281], 5.00th=[ 355], 10.00th=[ 420], 20.00th=[ 482], 00:12:12.560 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[ 660], 00:12:12.560 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 799], 00:12:12.560 | 99.00th=[ 898], 99.50th=[ 938], 99.90th=[ 988], 99.95th=[ 988], 00:12:12.560 | 99.99th=[ 988] 00:12:12.560 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:12:12.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:12.560 lat (usec) : 250=0.38%, 500=22.45%, 750=62.26%, 1000=11.51% 00:12:12.560 lat (msec) : 2=0.19%, 50=3.21% 00:12:12.560 cpu : usr=1.54%, sys=1.25%, ctx=530, majf=0, minf=1 00:12:12.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.560 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.560 00:12:12.560 Run status group 0 (all jobs): 00:12:12.560 READ: bw=273KiB/s (279kB/s), 63.7KiB/s-72.9KiB/s (65.3kB/s-74.7kB/s), io=284KiB (291kB), run=1004-1042msec 00:12:12.560 WRITE: bw=7862KiB/s (8050kB/s), 1965KiB/s-2040KiB/s (2013kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1042msec 00:12:12.560 00:12:12.560 Disk stats (read/write): 00:12:12.560 nvme0n1: ios=34/512, merge=0/0, ticks=894/222, in_queue=1116, util=89.28% 00:12:12.560 nvme0n2: ios=60/512, merge=0/0, ticks=905/218, in_queue=1123, util=90.62% 00:12:12.560 nvme0n3: ios=34/512, merge=0/0, ticks=1374/310, in_queue=1684, util=92.73% 00:12:12.560 nvme0n4: ios=70/512, merge=0/0, ticks=756/243, in_queue=999, util=99.57% 00:12:12.560 17:28:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:12.560 [global] 00:12:12.560 thread=1 00:12:12.560 invalidate=1 00:12:12.560 rw=write 00:12:12.560 time_based=1 00:12:12.560 runtime=1 00:12:12.560 ioengine=libaio 00:12:12.560 direct=1 00:12:12.560 bs=4096 00:12:12.560 iodepth=128 00:12:12.560 norandommap=0 00:12:12.560 numjobs=1 00:12:12.560 00:12:12.560 verify_dump=1 00:12:12.560 verify_backlog=512 00:12:12.560 verify_state_save=0 00:12:12.560 do_verify=1 00:12:12.560 verify=crc32c-intel 00:12:12.560 [job0] 00:12:12.560 filename=/dev/nvme0n1 00:12:12.560 [job1] 00:12:12.560 filename=/dev/nvme0n2 00:12:12.560 [job2] 00:12:12.560 filename=/dev/nvme0n3 00:12:12.560 [job3] 00:12:12.560 filename=/dev/nvme0n4 00:12:12.560 Could not set queue depth (nvme0n1) 00:12:12.560 Could not set queue depth (nvme0n2) 00:12:12.560 Could not set queue depth (nvme0n3) 00:12:12.560 Could not set queue depth (nvme0n4) 00:12:12.821 job0: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.821 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.821 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.821 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.821 fio-3.35 00:12:12.821 Starting 4 threads 00:12:14.212 00:12:14.213 job0: (groupid=0, jobs=1): err= 0: pid=1575699: Fri Dec 6 17:28:06 2024 00:12:14.213 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:12:14.213 slat (nsec): min=924, max=8343.4k, avg=78117.56, stdev=517076.97 00:12:14.213 clat (usec): min=2808, max=37289, avg=9990.23, stdev=5302.90 00:12:14.213 lat (usec): min=2815, max=37296, avg=10068.35, stdev=5341.98 00:12:14.213 clat percentiles (usec): 00:12:14.213 | 1.00th=[ 3687], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 6849], 00:12:14.213 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9110], 00:12:14.213 | 70.00th=[10290], 80.00th=[11207], 90.00th=[14091], 95.00th=[22414], 00:12:14.213 | 99.00th=[33162], 99.50th=[35390], 99.90th=[35914], 99.95th=[37487], 00:12:14.213 | 99.99th=[37487] 00:12:14.213 write: IOPS=6280, BW=24.5MiB/s (25.7MB/s)(24.8MiB/1010msec); 0 zone resets 00:12:14.213 slat (nsec): min=1621, max=7198.5k, avg=75430.15, stdev=410436.39 00:12:14.213 clat (usec): min=815, max=57320, avg=10487.14, stdev=8950.11 00:12:14.213 lat (usec): min=824, max=57331, avg=10562.57, stdev=9011.52 00:12:14.213 clat percentiles (usec): 00:12:14.213 | 1.00th=[ 2245], 5.00th=[ 3884], 10.00th=[ 4686], 20.00th=[ 6390], 00:12:14.213 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8160], 00:12:14.213 | 70.00th=[ 8455], 80.00th=[11076], 90.00th=[22152], 95.00th=[28443], 00:12:14.213 | 99.00th=[53740], 99.50th=[54789], 99.90th=[57410], 99.95th=[57410], 00:12:14.213 | 99.99th=[57410] 00:12:14.213 bw ( KiB/s): min=16024, max=33704, per=26.53%, avg=24864.00, stdev=12501.65, samples=2 00:12:14.213 iops : min= 4006, max= 8426, avg=6216.00, stdev=3125.41, samples=2 00:12:14.213 lat (usec) : 1000=0.02% 00:12:14.213 lat (msec) : 2=0.37%, 4=3.61%, 10=69.46%, 20=17.23%, 50=8.45% 00:12:14.213 lat (msec) : 100=0.86% 00:12:14.213 cpu : usr=4.96%, sys=6.24%, ctx=676, majf=0, minf=1 00:12:14.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:14.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.213 issued rwts: total=6144,6343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.213 job1: (groupid=0, jobs=1): err= 0: pid=1575718: Fri Dec 6 17:28:06 2024 00:12:14.213 read: IOPS=4137, BW=16.2MiB/s (16.9MB/s)(16.3MiB/1009msec) 00:12:14.213 slat (nsec): min=905, max=23430k, avg=116371.63, stdev=967355.34 00:12:14.213 clat (usec): min=3101, max=61458, avg=17009.38, stdev=12109.27 00:12:14.213 lat (usec): min=3120, max=61464, avg=17125.75, stdev=12201.08 00:12:14.213 clat percentiles (usec): 00:12:14.213 | 1.00th=[ 3687], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7701], 00:12:14.213 | 30.00th=[ 8979], 40.00th=[10028], 50.00th=[12780], 60.00th=[15926], 00:12:14.213 | 70.00th=[19006], 80.00th=[23200], 90.00th=[38536], 95.00th=[42730], 00:12:14.213 | 99.00th=[52167], 99.50th=[54789], 99.90th=[61080], 
99.95th=[61604], 00:12:14.213 | 99.99th=[61604] 00:12:14.213 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:12:14.213 slat (nsec): min=1573, max=16294k, avg=91443.39, stdev=616235.13 00:12:14.213 clat (usec): min=736, max=50854, avg=12381.93, stdev=9640.30 00:12:14.213 lat (usec): min=744, max=50864, avg=12473.37, stdev=9711.53 00:12:14.213 clat percentiles (usec): 00:12:14.213 | 1.00th=[ 1483], 5.00th=[ 3556], 10.00th=[ 4752], 20.00th=[ 5866], 00:12:14.213 | 30.00th=[ 6456], 40.00th=[ 7242], 50.00th=[ 8848], 60.00th=[10159], 00:12:14.213 | 70.00th=[13698], 80.00th=[17957], 90.00th=[27395], 95.00th=[33162], 00:12:14.213 | 99.00th=[48497], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:12:14.213 | 99.99th=[50594] 00:12:14.213 bw ( KiB/s): min=11904, max=24576, per=19.46%, avg=18240.00, stdev=8960.46, samples=2 00:12:14.213 iops : min= 2976, max= 6144, avg=4560.00, stdev=2240.11, samples=2 00:12:14.213 lat (usec) : 750=0.06% 00:12:14.213 lat (msec) : 2=0.82%, 4=3.43%, 10=45.17%, 20=29.17%, 50=19.81% 00:12:14.213 lat (msec) : 100=1.55% 00:12:14.213 cpu : usr=3.87%, sys=4.86%, ctx=349, majf=0, minf=1 00:12:14.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:14.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.213 issued rwts: total=4175,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.213 job2: (groupid=0, jobs=1): err= 0: pid=1575739: Fri Dec 6 17:28:06 2024 00:12:14.213 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:12:14.213 slat (nsec): min=963, max=23559k, avg=71924.62, stdev=593305.71 00:12:14.213 clat (usec): min=2229, max=38414, avg=9795.23, stdev=5207.31 00:12:14.213 lat (usec): min=2238, max=38420, avg=9867.15, stdev=5233.58 00:12:14.213 clat percentiles (usec): 00:12:14.213 | 1.00th=[ 3785], 5.00th=[ 4883], 10.00th=[ 5866], 20.00th=[ 6980], 00:12:14.213 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 9110], 00:12:14.213 | 70.00th=[ 9503], 80.00th=[11469], 90.00th=[15401], 95.00th=[18744], 00:12:14.213 | 99.00th=[33817], 99.50th=[34341], 99.90th=[38536], 99.95th=[38536], 00:12:14.213 | 99.99th=[38536] 00:12:14.213 write: IOPS=7021, BW=27.4MiB/s (28.8MB/s)(27.7MiB/1009msec); 0 zone resets 00:12:14.213 slat (nsec): min=1642, max=13338k, avg=64014.41, stdev=516162.13 00:12:14.213 clat (usec): min=834, max=45401, avg=8843.45, stdev=5437.14 00:12:14.213 lat (usec): min=870, max=45433, avg=8907.47, stdev=5481.36 00:12:14.213 clat percentiles (usec): 00:12:14.213 | 1.00th=[ 1729], 5.00th=[ 3523], 10.00th=[ 4228], 20.00th=[ 5669], 00:12:14.213 | 30.00th=[ 6194], 40.00th=[ 6652], 50.00th=[ 7373], 60.00th=[ 7963], 00:12:14.213 | 70.00th=[ 8979], 80.00th=[11207], 90.00th=[15926], 95.00th=[18482], 00:12:14.213 | 99.00th=[32113], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:12:14.213 | 99.99th=[45351] 00:12:14.213 bw ( KiB/s): min=23584, max=32080, per=29.69%, avg=27832.00, stdev=6007.58, samples=2 00:12:14.213 iops : min= 5896, max= 8020, avg=6958.00, stdev=1501.89, samples=2 00:12:14.213 lat (usec) : 1000=0.01% 00:12:14.213 lat (msec) : 2=0.70%, 4=4.33%, 10=69.57%, 20=21.16%, 50=4.24% 00:12:14.213 cpu : usr=5.56%, sys=7.74%, ctx=423, majf=0, minf=1 00:12:14.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:14.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.213 issued rwts: total=6656,7085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.213 job3: (groupid=0, jobs=1): err= 0: pid=1575746: Fri Dec 6 17:28:06 2024 00:12:14.213 read: IOPS=5183, BW=20.2MiB/s (21.2MB/s)(20.4MiB/1009msec) 00:12:14.213 slat (nsec): min=979, max=22204k, avg=89949.16, stdev=750733.42 00:12:14.213 clat (usec): min=2206, max=42753, avg=11635.05, stdev=4962.19 00:12:14.213 lat (usec): min=2215, max=42783, avg=11725.00, stdev=5021.73 00:12:14.213 clat percentiles (usec): 00:12:14.213 | 1.00th=[ 2409], 5.00th=[ 4424], 10.00th=[ 6718], 20.00th=[ 8029], 00:12:14.213 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[10814], 60.00th=[11863], 00:12:14.213 | 70.00th=[12911], 80.00th=[14615], 90.00th=[18220], 95.00th=[22152], 00:12:14.213 | 99.00th=[27657], 99.50th=[27657], 99.90th=[30016], 99.95th=[30016], 00:12:14.213 | 99.99th=[42730] 00:12:14.213 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:12:14.213 slat (nsec): min=1669, max=17700k, avg=84343.19, stdev=569363.48 00:12:14.213 clat (usec): min=1466, max=33340, avg=11877.02, stdev=6545.29 00:12:14.213 lat (usec): min=1633, max=33348, avg=11961.37, stdev=6593.47 00:12:14.213 clat percentiles (usec): 00:12:14.213 | 1.00th=[ 2573], 5.00th=[ 4228], 10.00th=[ 5538], 20.00th=[ 6849], 00:12:14.213 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 9110], 60.00th=[12125], 00:12:14.213 | 70.00th=[14615], 80.00th=[16319], 90.00th=[20841], 95.00th=[26608], 00:12:14.213 | 99.00th=[30802], 99.50th=[32113], 99.90th=[33162], 99.95th=[33424], 00:12:14.213 | 99.99th=[33424] 00:12:14.213 bw ( KiB/s): min=20480, max=24440, per=23.96%, avg=22460.00, stdev=2800.14, samples=2 00:12:14.213 iops : min= 5120, max= 6110, avg=5615.00, stdev=700.04, samples=2 00:12:14.213 lat (msec) : 2=0.13%, 4=4.05%, 10=42.38%, 20=43.67%, 50=9.78% 00:12:14.213 cpu : usr=3.77%, sys=6.85%, ctx=427, majf=0, minf=2 00:12:14.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:14.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.213 issued rwts: total=5230,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.213 00:12:14.213 Run status group 0 (all jobs): 00:12:14.213 READ: bw=85.9MiB/s (90.1MB/s), 16.2MiB/s-25.8MiB/s (16.9MB/s-27.0MB/s), io=86.7MiB (91.0MB), run=1009-1010msec 00:12:14.213 WRITE: bw=91.5MiB/s (96.0MB/s), 17.8MiB/s-27.4MiB/s (18.7MB/s-28.8MB/s), io=92.5MiB (96.9MB), run=1009-1010msec 00:12:14.213 00:12:14.213 Disk stats (read/write): 00:12:14.213 nvme0n1: ios=5677/5663, merge=0/0, ticks=41686/37185, in_queue=78871, util=85.57% 00:12:14.213 nvme0n2: ios=3639/4096, merge=0/0, ticks=36583/32157, in_queue=68740, util=90.32% 00:12:14.213 nvme0n3: ios=5564/5632, merge=0/0, ticks=39706/36555, in_queue=76261, util=93.36% 00:12:14.213 nvme0n4: ios=4159/4487, merge=0/0, ticks=46353/52164, in_queue=98517, util=97.01% 00:12:14.213 17:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:14.213 [global] 00:12:14.213 thread=1 00:12:14.213 invalidate=1 00:12:14.213 rw=randwrite 00:12:14.213 time_based=1 
00:12:14.213 runtime=1 00:12:14.213 ioengine=libaio 00:12:14.213 direct=1 00:12:14.213 bs=4096 00:12:14.213 iodepth=128 00:12:14.213 norandommap=0 00:12:14.213 numjobs=1 00:12:14.213 00:12:14.213 verify_dump=1 00:12:14.213 verify_backlog=512 00:12:14.213 verify_state_save=0 00:12:14.213 do_verify=1 00:12:14.213 verify=crc32c-intel 00:12:14.213 [job0] 00:12:14.213 filename=/dev/nvme0n1 00:12:14.213 [job1] 00:12:14.213 filename=/dev/nvme0n2 00:12:14.213 [job2] 00:12:14.213 filename=/dev/nvme0n3 00:12:14.213 [job3] 00:12:14.213 filename=/dev/nvme0n4 00:12:14.213 Could not set queue depth (nvme0n1) 00:12:14.213 Could not set queue depth (nvme0n2) 00:12:14.213 Could not set queue depth (nvme0n3) 00:12:14.213 Could not set queue depth (nvme0n4) 00:12:14.473 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.473 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.473 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.473 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.473 fio-3.35 00:12:14.473 Starting 4 threads 00:12:15.857 00:12:15.857 job0: (groupid=0, jobs=1): err= 0: pid=1576189: Fri Dec 6 17:28:07 2024 00:12:15.857 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:12:15.857 slat (nsec): min=984, max=20203k, avg=116038.50, stdev=935363.67 00:12:15.857 clat (usec): min=5650, max=57110, avg=14677.00, stdev=8149.68 00:12:15.857 lat (usec): min=5657, max=57119, avg=14793.04, stdev=8245.89 00:12:15.857 clat percentiles (usec): 00:12:15.857 | 1.00th=[ 5800], 5.00th=[ 6915], 10.00th=[ 7308], 20.00th=[ 9503], 00:12:15.857 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11600], 60.00th=[13042], 00:12:15.857 | 70.00th=[15008], 80.00th=[19792], 90.00th=[25560], 95.00th=[31065], 00:12:15.857 | 99.00th=[48497], 99.50th=[54264], 99.90th=[56886], 99.95th=[56886], 00:12:15.857 | 99.99th=[56886] 00:12:15.857 write: IOPS=3523, BW=13.8MiB/s (14.4MB/s)(13.9MiB/1012msec); 0 zone resets 00:12:15.857 slat (nsec): min=1687, max=16779k, avg=173877.68, stdev=899767.19 00:12:15.857 clat (usec): min=3439, max=75395, avg=23222.14, stdev=18423.53 00:12:15.857 lat (usec): min=3597, max=75403, avg=23396.01, stdev=18547.76 00:12:15.857 clat percentiles (usec): 00:12:15.857 | 1.00th=[ 3589], 5.00th=[ 4293], 10.00th=[ 5342], 20.00th=[ 7046], 00:12:15.857 | 30.00th=[ 8455], 40.00th=[11863], 50.00th=[16581], 60.00th=[20579], 00:12:15.857 | 70.00th=[33817], 80.00th=[43254], 90.00th=[50070], 95.00th=[58983], 00:12:15.857 | 99.00th=[69731], 99.50th=[70779], 99.90th=[74974], 99.95th=[74974], 00:12:15.857 | 99.99th=[74974] 00:12:15.857 bw ( KiB/s): min=12656, max=14848, per=22.80%, avg=13752.00, stdev=1549.98, samples=2 00:12:15.857 iops : min= 3164, max= 3712, avg=3438.00, stdev=387.49, samples=2 00:12:15.857 lat (msec) : 4=1.10%, 10=29.41%, 20=37.84%, 50=25.76%, 100=5.89% 00:12:15.857 cpu : usr=3.36%, sys=3.76%, ctx=258, majf=0, minf=1 00:12:15.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:15.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.857 issued rwts: total=3072,3566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.857 job1: 
(groupid=0, jobs=1): err= 0: pid=1576208: Fri Dec 6 17:28:07 2024 00:12:15.857 read: IOPS=4732, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1004msec) 00:12:15.857 slat (nsec): min=897, max=17749k, avg=108792.86, stdev=805881.87 00:12:15.857 clat (usec): min=903, max=92331, avg=12444.09, stdev=10856.43 00:12:15.857 lat (usec): min=1134, max=92337, avg=12552.88, stdev=10959.38 00:12:15.857 clat percentiles (usec): 00:12:15.857 | 1.00th=[ 1942], 5.00th=[ 3130], 10.00th=[ 4490], 20.00th=[ 6128], 00:12:15.857 | 30.00th=[ 6783], 40.00th=[ 7570], 50.00th=[ 8455], 60.00th=[11207], 00:12:15.857 | 70.00th=[14091], 80.00th=[16909], 90.00th=[24773], 95.00th=[30278], 00:12:15.857 | 99.00th=[65799], 99.50th=[82314], 99.90th=[92799], 99.95th=[92799], 00:12:15.857 | 99.99th=[92799] 00:12:15.857 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:12:15.857 slat (nsec): min=1468, max=25468k, avg=81215.39, stdev=639227.79 00:12:15.857 clat (usec): min=411, max=98615, avg=13339.23, stdev=18894.81 00:12:15.857 lat (usec): min=423, max=98626, avg=13420.44, stdev=18989.56 00:12:15.857 clat percentiles (usec): 00:12:15.857 | 1.00th=[ 750], 5.00th=[ 1270], 10.00th=[ 1844], 20.00th=[ 3163], 00:12:15.857 | 30.00th=[ 4621], 40.00th=[ 5538], 50.00th=[ 6063], 60.00th=[ 6390], 00:12:15.857 | 70.00th=[ 8356], 80.00th=[15008], 90.00th=[34866], 95.00th=[67634], 00:12:15.857 | 99.00th=[80217], 99.50th=[87557], 99.90th=[99091], 99.95th=[99091], 00:12:15.857 | 99.99th=[99091] 00:12:15.857 bw ( KiB/s): min=19712, max=21248, per=33.96%, avg=20480.00, stdev=1086.12, samples=2 00:12:15.857 iops : min= 4928, max= 5312, avg=5120.00, stdev=271.53, samples=2 00:12:15.857 lat (usec) : 500=0.04%, 750=0.48%, 1000=1.09% 00:12:15.857 lat (msec) : 2=4.51%, 4=11.71%, 10=46.36%, 20=20.36%, 50=10.42% 00:12:15.857 lat (msec) : 100=5.02% 00:12:15.857 cpu : usr=3.79%, sys=5.48%, ctx=364, majf=0, minf=1 00:12:15.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:15.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.857 issued rwts: total=4751,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.857 job2: (groupid=0, jobs=1): err= 0: pid=1576229: Fri Dec 6 17:28:07 2024 00:12:15.857 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:12:15.857 slat (nsec): min=1032, max=20101k, avg=128792.47, stdev=902110.90 00:12:15.857 clat (usec): min=4978, max=48586, avg=15202.27, stdev=7464.65 00:12:15.857 lat (usec): min=4986, max=48594, avg=15331.06, stdev=7553.04 00:12:15.857 clat percentiles (usec): 00:12:15.857 | 1.00th=[ 6980], 5.00th=[ 7504], 10.00th=[ 8291], 20.00th=[ 8979], 00:12:15.857 | 30.00th=[10159], 40.00th=[11469], 50.00th=[12649], 60.00th=[14091], 00:12:15.857 | 70.00th=[16581], 80.00th=[23200], 90.00th=[25822], 95.00th=[28967], 00:12:15.857 | 99.00th=[36439], 99.50th=[43779], 99.90th=[48497], 99.95th=[48497], 00:12:15.857 | 99.99th=[48497] 00:12:15.858 write: IOPS=2610, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1004msec); 0 zone resets 00:12:15.858 slat (nsec): min=1697, max=18431k, avg=248036.21, stdev=1149225.19 00:12:15.858 clat (usec): min=2233, max=98533, avg=33537.87, stdev=24986.25 00:12:15.858 lat (usec): min=3938, max=98544, avg=33785.90, stdev=25163.01 00:12:15.858 clat percentiles (usec): 00:12:15.858 | 1.00th=[ 3982], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[10421], 00:12:15.858 | 30.00th=[16712], 
40.00th=[19792], 50.00th=[26346], 60.00th=[38011], 00:12:15.858 | 70.00th=[44827], 80.00th=[56361], 90.00th=[68682], 95.00th=[85459], 00:12:15.858 | 99.00th=[96994], 99.50th=[96994], 99.90th=[98042], 99.95th=[98042], 00:12:15.858 | 99.99th=[98042] 00:12:15.858 bw ( KiB/s): min= 9328, max=11152, per=16.98%, avg=10240.00, stdev=1289.76, samples=2 00:12:15.858 iops : min= 2332, max= 2788, avg=2560.00, stdev=322.44, samples=2 00:12:15.858 lat (msec) : 4=0.75%, 10=22.20%, 20=36.09%, 50=28.33%, 100=12.62% 00:12:15.858 cpu : usr=1.79%, sys=4.09%, ctx=250, majf=0, minf=2 00:12:15.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:15.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.858 issued rwts: total=2560,2621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.858 job3: (groupid=0, jobs=1): err= 0: pid=1576236: Fri Dec 6 17:28:07 2024 00:12:15.858 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:12:15.858 slat (nsec): min=1018, max=12242k, avg=117571.19, stdev=790550.89 00:12:15.858 clat (usec): min=5421, max=72150, avg=14064.48, stdev=9111.84 00:12:15.858 lat (usec): min=5424, max=72159, avg=14182.05, stdev=9199.91 00:12:15.858 clat percentiles (usec): 00:12:15.858 | 1.00th=[ 5473], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 8586], 00:12:15.858 | 30.00th=[ 9372], 40.00th=[10814], 50.00th=[12518], 60.00th=[13042], 00:12:15.858 | 70.00th=[14615], 80.00th=[16581], 90.00th=[19792], 95.00th=[30016], 00:12:15.858 | 99.00th=[58983], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:12:15.858 | 99.99th=[71828] 00:12:15.858 write: IOPS=3903, BW=15.2MiB/s (16.0MB/s)(15.4MiB/1012msec); 0 zone resets 00:12:15.858 slat (nsec): min=1601, max=15266k, avg=140902.19, stdev=807045.30 00:12:15.858 clat (usec): min=1105, max=75497, avg=19751.00, stdev=16308.21 00:12:15.858 lat (usec): min=1116, max=75499, avg=19891.90, stdev=16409.98 00:12:15.858 clat percentiles (usec): 00:12:15.858 | 1.00th=[ 4424], 5.00th=[ 4817], 10.00th=[ 5407], 20.00th=[ 6849], 00:12:15.858 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[14222], 60.00th=[17171], 00:12:15.858 | 70.00th=[22676], 80.00th=[32637], 90.00th=[44827], 95.00th=[54789], 00:12:15.858 | 99.00th=[68682], 99.50th=[70779], 99.90th=[74974], 99.95th=[74974], 00:12:15.858 | 99.99th=[74974] 00:12:15.858 bw ( KiB/s): min=14240, max=16336, per=25.35%, avg=15288.00, stdev=1482.10, samples=2 00:12:15.858 iops : min= 3560, max= 4084, avg=3822.00, stdev=370.52, samples=2 00:12:15.858 lat (msec) : 2=0.03%, 4=0.15%, 10=38.19%, 20=39.46%, 50=17.92% 00:12:15.858 lat (msec) : 100=4.26% 00:12:15.858 cpu : usr=3.46%, sys=4.25%, ctx=273, majf=0, minf=2 00:12:15.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:15.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.858 issued rwts: total=3584,3950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.858 00:12:15.858 Run status group 0 (all jobs): 00:12:15.858 READ: bw=53.9MiB/s (56.5MB/s), 9.96MiB/s-18.5MiB/s (10.4MB/s-19.4MB/s), io=54.6MiB (57.2MB), run=1004-1012msec 00:12:15.858 WRITE: bw=58.9MiB/s (61.8MB/s), 10.2MiB/s-19.9MiB/s (10.7MB/s-20.9MB/s), io=59.6MiB (62.5MB), run=1004-1012msec 
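(Editor's sketch: the randwrite pass summarized above was launched through scripts/fio-wrapper, and the job file it printed maps directly onto the wrapper's flags: -i 4096 -> bs, -d 128 -> iodepth, -t randwrite -> rw, -r 1 -> time_based runtime, -v -> crc32c-intel verification. A minimal single-device equivalent as a bare fio call — one job only; the wrapper actually writes a four-job file covering /dev/nvme0n1..n4, and the exact options it passes through internally are not shown in this log:)

  # One-job approximation of the job file printed above.
  fio --name=job0 --filename=/dev/nvme0n1 --thread=1 --numjobs=1 \
      --rw=randwrite --bs=4096 --iodepth=128 --ioengine=libaio --direct=1 \
      --time_based=1 --runtime=1 --invalidate=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512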
00:12:15.858 00:12:15.858 Disk stats (read/write): 00:12:15.858 nvme0n1: ios=3100/3079, merge=0/0, ticks=44340/58577, in_queue=102917, util=96.39% 00:12:15.858 nvme0n2: ios=3556/3633, merge=0/0, ticks=45801/56867, in_queue=102668, util=88.18% 00:12:15.858 nvme0n3: ios=1553/1887, merge=0/0, ticks=27479/71311, in_queue=98790, util=92.50% 00:12:15.858 nvme0n4: ios=3085/3079, merge=0/0, ticks=44264/58496, in_queue=102760, util=95.94% 00:12:15.858 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:15.858 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1576484 00:12:15.858 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:15.858 17:28:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:15.858 [global] 00:12:15.858 thread=1 00:12:15.858 invalidate=1 00:12:15.858 rw=read 00:12:15.858 time_based=1 00:12:15.858 runtime=10 00:12:15.858 ioengine=libaio 00:12:15.858 direct=1 00:12:15.858 bs=4096 00:12:15.858 iodepth=1 00:12:15.858 norandommap=1 00:12:15.858 numjobs=1 00:12:15.858 00:12:15.858 [job0] 00:12:15.858 filename=/dev/nvme0n1 00:12:15.858 [job1] 00:12:15.858 filename=/dev/nvme0n2 00:12:15.858 [job2] 00:12:15.858 filename=/dev/nvme0n3 00:12:15.858 [job3] 00:12:15.858 filename=/dev/nvme0n4 00:12:15.858 Could not set queue depth (nvme0n1) 00:12:15.858 Could not set queue depth (nvme0n2) 00:12:15.858 Could not set queue depth (nvme0n3) 00:12:15.858 Could not set queue depth (nvme0n4) 00:12:16.426 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:16.426 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:16.426 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:16.426 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:16.426 fio-3.35 00:12:16.426 Starting 4 threads 00:12:18.969 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:18.969 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:18.969 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7507968, buflen=4096 00:12:18.969 fio: pid=1576720, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:19.230 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.230 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:19.230 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2420736, buflen=4096 00:12:19.230 fio: pid=1576713, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:19.491 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=843776, buflen=4096 00:12:19.491 fio: pid=1576690, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:19.491 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.491 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:19.491 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4923392, buflen=4096 00:12:19.491 fio: pid=1576697, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:19.491 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.491 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:19.491 00:12:19.491 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1576690: Fri Dec 6 17:28:11 2024 00:12:19.491 read: IOPS=69, BW=278KiB/s (285kB/s)(824KiB/2961msec) 00:12:19.491 slat (nsec): min=6258, max=57975, avg=25339.59, stdev=5687.97 00:12:19.491 clat (usec): min=726, max=42105, avg=14233.27, stdev=19126.94 00:12:19.491 lat (usec): min=734, max=42132, avg=14258.61, stdev=19127.91 00:12:19.491 clat percentiles (usec): 00:12:19.491 | 1.00th=[ 783], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 955], 00:12:19.491 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1074], 00:12:19.491 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:12:19.491 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:19.491 | 99.99th=[42206] 00:12:19.491 bw ( KiB/s): min= 96, max= 1144, per=6.40%, avg=312.00, stdev=465.17, samples=5 00:12:19.491 iops : min= 24, max= 286, avg=78.00, stdev=116.29, samples=5 00:12:19.491 lat (usec) : 750=0.48%, 1000=39.13% 00:12:19.491 lat (msec) : 2=27.54%, 50=32.37% 00:12:19.491 cpu : usr=0.14%, sys=0.24%, ctx=208, majf=0, minf=1 00:12:19.491 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.491 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.491 issued rwts: total=207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.491 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.491 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1576697: Fri Dec 6 17:28:11 2024 00:12:19.491 read: IOPS=382, BW=1529KiB/s (1566kB/s)(4808KiB/3144msec) 00:12:19.491 slat (usec): min=6, max=17611, avg=50.97, stdev=613.92 00:12:19.491 clat (usec): min=370, max=42040, avg=2540.01, stdev=7732.14 00:12:19.491 lat (usec): min=397, max=42066, avg=2581.00, stdev=7745.60 00:12:19.491 clat percentiles (usec): 00:12:19.491 | 1.00th=[ 562], 5.00th=[ 725], 10.00th=[ 816], 20.00th=[ 922], 00:12:19.491 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1029], 60.00th=[ 1057], 00:12:19.491 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1205], 00:12:19.491 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:12:19.492 | 99.99th=[42206] 00:12:19.492 bw ( KiB/s): min= 696, max= 2816, per=30.89%, avg=1506.67, stdev=871.75, samples=6 00:12:19.492 iops : min= 174, max= 704, avg=376.67, stdev=217.94, samples=6 00:12:19.492 lat (usec) : 500=0.17%, 750=6.07%, 1000=35.41% 00:12:19.492 lat (msec) : 2=54.36%, 4=0.08%, 50=3.82% 00:12:19.492 cpu : 
usr=0.80%, sys=1.40%, ctx=1206, majf=0, minf=2 00:12:19.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.492 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.492 issued rwts: total=1203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.492 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1576713: Fri Dec 6 17:28:11 2024 00:12:19.492 read: IOPS=210, BW=839KiB/s (859kB/s)(2364KiB/2818msec) 00:12:19.492 slat (usec): min=6, max=3751, avg=30.82, stdev=153.33 00:12:19.492 clat (usec): min=539, max=44137, avg=4695.28, stdev=11773.75 00:12:19.492 lat (usec): min=546, max=44967, avg=4726.10, stdev=11794.71 00:12:19.492 clat percentiles (usec): 00:12:19.492 | 1.00th=[ 717], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 914], 00:12:19.492 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 1004], 00:12:19.492 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1188], 95.00th=[41681], 00:12:19.492 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:12:19.492 | 99.99th=[44303] 00:12:19.492 bw ( KiB/s): min= 96, max= 2376, per=19.16%, avg=934.40, stdev=968.75, samples=5 00:12:19.492 iops : min= 24, max= 594, avg=233.60, stdev=242.19, samples=5 00:12:19.492 lat (usec) : 750=1.18%, 1000=58.61% 00:12:19.492 lat (msec) : 2=30.91%, 50=9.12% 00:12:19.492 cpu : usr=0.28%, sys=0.78%, ctx=594, majf=0, minf=2 00:12:19.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.492 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.492 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.492 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1576720: Fri Dec 6 17:28:11 2024 00:12:19.492 read: IOPS=700, BW=2802KiB/s (2869kB/s)(7332KiB/2617msec) 00:12:19.492 slat (nsec): min=7166, max=61005, avg=26532.05, stdev=2563.98 00:12:19.492 clat (usec): min=405, max=42067, avg=1382.44, stdev=4148.64 00:12:19.492 lat (usec): min=413, max=42094, avg=1408.98, stdev=4148.68 00:12:19.492 clat percentiles (usec): 00:12:19.492 | 1.00th=[ 603], 5.00th=[ 783], 10.00th=[ 857], 20.00th=[ 914], 00:12:19.492 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:12:19.492 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:12:19.492 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:19.492 | 99.99th=[42206] 00:12:19.492 bw ( KiB/s): min= 96, max= 4016, per=60.06%, avg=2928.00, stdev=1712.74, samples=5 00:12:19.492 iops : min= 24, max= 1004, avg=732.00, stdev=428.18, samples=5 00:12:19.492 lat (usec) : 500=0.60%, 750=2.78%, 1000=65.76% 00:12:19.492 lat (msec) : 2=29.77%, 50=1.04% 00:12:19.492 cpu : usr=0.54%, sys=2.41%, ctx=1839, majf=0, minf=2 00:12:19.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.492 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.492 issued rwts: total=1834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.492 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:12:19.492 00:12:19.492 Run status group 0 (all jobs): 00:12:19.492 READ: bw=4875KiB/s (4992kB/s), 278KiB/s-2802KiB/s (285kB/s-2869kB/s), io=15.0MiB (15.7MB), run=2617-3144msec 00:12:19.492 00:12:19.492 Disk stats (read/write): 00:12:19.492 nvme0n1: ios=203/0, merge=0/0, ticks=2806/0, in_queue=2806, util=94.76% 00:12:19.492 nvme0n2: ios=1180/0, merge=0/0, ticks=2963/0, in_queue=2963, util=95.14% 00:12:19.492 nvme0n3: ios=586/0, merge=0/0, ticks=2561/0, in_queue=2561, util=96.03% 00:12:19.492 nvme0n4: ios=1871/0, merge=0/0, ticks=3374/0, in_queue=3374, util=98.85% 00:12:19.752 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.752 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:20.013 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:20.013 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:20.014 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:20.014 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:20.275 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:20.275 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1576484 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:20.536 nvmf hotplug test: fio failed as expected 00:12:20.536 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.797 rmmod nvme_tcp 00:12:20.797 rmmod nvme_fabrics 00:12:20.797 rmmod nvme_keyring 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1572970 ']' 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1572970 00:12:20.797 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1572970 ']' 00:12:20.798 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1572970 00:12:20.798 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:20.798 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.798 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572970 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572970' 00:12:21.058 killing process with pid 1572970 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1572970 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1572970 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.058 17:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.600 00:12:23.600 real 0m29.349s 00:12:23.600 user 2m40.011s 00:12:23.600 sys 0m9.113s 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.600 ************************************ 00:12:23.600 END TEST nvmf_fio_target 00:12:23.600 ************************************ 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:23.600 ************************************ 00:12:23.600 START TEST nvmf_bdevio 00:12:23.600 ************************************ 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:23.600 * Looking for test storage... 
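(Editor's sketch: the hotplug check that closed the fio_target test above follows a simple pattern — run fio reads against the exported namespaces, delete the backing bdevs underneath them, and treat a failing fio as the pass condition. A condensed restatement of the sequence traced above, with paths shortened to the spdk checkout root and error handling simplified; bdev names and flags are taken from the trace:)

  # Start a long read workload, then rip the backing bdevs out from under it.
  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  scripts/rpc.py bdev_raid_delete concat0
  scripts/rpc.py bdev_raid_delete raid0
  for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      scripts/rpc.py bdev_malloc_delete "$bdev"
  done
  fio_status=0
  wait "$fio_pid" || fio_status=$?
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # A zero exit here would mean fio never saw the deletions, i.e. a failure.
  [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'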
00:12:23.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:23.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.600 --rc genhtml_branch_coverage=1 00:12:23.600 --rc genhtml_function_coverage=1 00:12:23.600 --rc genhtml_legend=1 00:12:23.600 --rc geninfo_all_blocks=1 00:12:23.600 --rc geninfo_unexecuted_blocks=1 00:12:23.600 00:12:23.600 ' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:23.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.600 --rc genhtml_branch_coverage=1 00:12:23.600 --rc genhtml_function_coverage=1 00:12:23.600 --rc genhtml_legend=1 00:12:23.600 --rc geninfo_all_blocks=1 00:12:23.600 --rc geninfo_unexecuted_blocks=1 00:12:23.600 00:12:23.600 ' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:23.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.600 --rc genhtml_branch_coverage=1 00:12:23.600 --rc genhtml_function_coverage=1 00:12:23.600 --rc genhtml_legend=1 00:12:23.600 --rc geninfo_all_blocks=1 00:12:23.600 --rc geninfo_unexecuted_blocks=1 00:12:23.600 00:12:23.600 ' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:23.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.600 --rc genhtml_branch_coverage=1 00:12:23.600 --rc genhtml_function_coverage=1 00:12:23.600 --rc genhtml_legend=1 00:12:23.600 --rc geninfo_all_blocks=1 00:12:23.600 --rc geninfo_unexecuted_blocks=1 00:12:23.600 00:12:23.600 ' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.600 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.601 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.601 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.601 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.601 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.601 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.601 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.601 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.601 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.601 17:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:31.740 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:31.740 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:31.740 17:28:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:31.740 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:31.740 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.740 
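(Editor's sketch: the NIC discovery traced above boils down to matching PCI functions against a table of supported vendor/device IDs — both ports found here are Intel E810, 0x8086:0x159b — and then reading each function's netdev name out of sysfs. A rough standalone equivalent, assuming pciutils is installed; the autotest helper walks its own pci_bus_cache rather than calling lspci:)

  # Find E810 ports and the net devices bound to them.
  for pci in $(lspci -Dd 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: $(basename "$dev")"
      done
  done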
17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:12:31.740 00:12:31.740 --- 10.0.0.2 ping statistics --- 00:12:31.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.740 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:12:31.740 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:12:31.740 00:12:31.741 --- 10.0.0.1 ping statistics --- 00:12:31.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.741 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1581979 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1581979 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1581979 ']' 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.741 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:31.741 [2024-12-06 17:28:22.884849] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
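At this point nvmf_tcp_init has finished the network plumbing: cvl_0_0 (the target port) has been moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 (the initiator port) keeps 10.0.0.1/24 in the root namespace, port 4420 is opened with a comment-tagged iptables rule, and connectivity is verified with one ping in each direction before nvmf_tgt is launched inside the namespace via ip netns exec. Condensed into a hand-runnable sketch (interface names are the ones from this log; substitute your own two ports elsewhere):

    ns=cvl_0_0_ns_spdk
    tgt_if=cvl_0_0   # moves into the namespace, becomes the target side
    ini_if=cvl_0_1   # stays in the root namespace, initiator side

    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up

    # Tag the rule so teardown can strip it with a grep (see the
    # iptables-save | grep -v SPDK_NVMF pass at the end of the suite).
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF

    ping -c 1 10.0.0.2                       # root ns -> target ns
    ip netns exec "$ns" ping -c 1 10.0.0.1   # target ns -> root ns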
00:12:31.741 [2024-12-06 17:28:22.884918] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.741 [2024-12-06 17:28:22.984472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.741 [2024-12-06 17:28:23.036972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.741 [2024-12-06 17:28:23.037028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.741 [2024-12-06 17:28:23.037036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.741 [2024-12-06 17:28:23.037044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.741 [2024-12-06 17:28:23.037050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.741 [2024-12-06 17:28:23.039402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:31.741 [2024-12-06 17:28:23.039563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:31.741 [2024-12-06 17:28:23.039722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:31.741 [2024-12-06 17:28:23.039902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:31.741 [2024-12-06 17:28:23.761037] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.741 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:31.741 Malloc0 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.002 17:28:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.002 [2024-12-06 17:28:23.836230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:32.002 { 00:12:32.002 "params": { 00:12:32.002 "name": "Nvme$subsystem", 00:12:32.002 "trtype": "$TEST_TRANSPORT", 00:12:32.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.002 "adrfam": "ipv4", 00:12:32.002 "trsvcid": "$NVMF_PORT", 00:12:32.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.002 "hdgst": ${hdgst:-false}, 00:12:32.002 "ddgst": ${ddgst:-false} 00:12:32.002 }, 00:12:32.002 "method": "bdev_nvme_attach_controller" 00:12:32.002 } 00:12:32.002 EOF 00:12:32.002 )") 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:32.002 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:32.002 "params": { 00:12:32.002 "name": "Nvme1", 00:12:32.002 "trtype": "tcp", 00:12:32.002 "traddr": "10.0.0.2", 00:12:32.002 "adrfam": "ipv4", 00:12:32.002 "trsvcid": "4420", 00:12:32.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.002 "hdgst": false, 00:12:32.002 "ddgst": false 00:12:32.002 }, 00:12:32.002 "method": "bdev_nvme_attach_controller" 00:12:32.002 }' 00:12:32.002 [2024-12-06 17:28:23.896189] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
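That completes the target-side configuration: a TCP transport, a 64 MiB / 512 B-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. bdevio is then started with the generated JSON on fd 62, so it attaches as a fabrics initiator (bdev_nvme_attach_controller against 10.0.0.2:4420) rather than using a kernel host. The same target setup written as explicit rpc.py calls instead of the rpc_cmd wrapper used here (the rpc.py path is assumed from the workspace layout; the commands and arguments are verbatim from the xtrace above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, suite options
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

bdevio then drives its CUnit suite against the attached Nvme1n1 bdev, as the output below shows.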
00:12:32.002 [2024-12-06 17:28:23.896259] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582069 ] 00:12:32.002 [2024-12-06 17:28:23.992630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:32.002 [2024-12-06 17:28:24.051731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.002 [2024-12-06 17:28:24.051944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.002 [2024-12-06 17:28:24.051945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.575 I/O targets: 00:12:32.575 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:32.575 00:12:32.575 00:12:32.575 CUnit - A unit testing framework for C - Version 2.1-3 00:12:32.575 http://cunit.sourceforge.net/ 00:12:32.575 00:12:32.575 00:12:32.575 Suite: bdevio tests on: Nvme1n1 00:12:32.575 Test: blockdev write read block ...passed 00:12:32.575 Test: blockdev write zeroes read block ...passed 00:12:32.575 Test: blockdev write zeroes read no split ...passed 00:12:32.575 Test: blockdev write zeroes read split ...passed 00:12:32.575 Test: blockdev write zeroes read split partial ...passed 00:12:32.575 Test: blockdev reset ...[2024-12-06 17:28:24.552654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:32.575 [2024-12-06 17:28:24.552754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c580 (9): Bad file descriptor 00:12:32.575 [2024-12-06 17:28:24.606911] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:32.575 passed 00:12:32.575 Test: blockdev write read 8 blocks ...passed 00:12:32.575 Test: blockdev write read size > 128k ...passed 00:12:32.575 Test: blockdev write read invalid size ...passed 00:12:32.836 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:32.836 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:32.836 Test: blockdev write read max offset ...passed 00:12:32.836 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:32.836 Test: blockdev writev readv 8 blocks ...passed 00:12:32.836 Test: blockdev writev readv 30 x 1block ...passed 00:12:32.836 Test: blockdev writev readv block ...passed 00:12:32.836 Test: blockdev writev readv size > 128k ...passed 00:12:32.836 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:32.836 Test: blockdev comparev and writev ...[2024-12-06 17:28:24.827942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.836 [2024-12-06 17:28:24.827979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:32.836 [2024-12-06 17:28:24.827995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.836 [2024-12-06 17:28:24.828003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:32.836 [2024-12-06 17:28:24.828356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.836 [2024-12-06 17:28:24.828368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:32.836 [2024-12-06 17:28:24.828382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.836 [2024-12-06 17:28:24.828390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:32.836 [2024-12-06 17:28:24.828775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.836 [2024-12-06 17:28:24.828788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:32.836 [2024-12-06 17:28:24.828801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.836 [2024-12-06 17:28:24.828810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:32.836 [2024-12-06 17:28:24.829178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.836 [2024-12-06 17:28:24.829189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:32.836 [2024-12-06 17:28:24.829203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:32.836 [2024-12-06 17:28:24.829210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:32.836 passed 00:12:33.098 Test: blockdev nvme passthru rw ...passed 00:12:33.098 Test: blockdev nvme passthru vendor specific ...[2024-12-06 17:28:24.911170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.098 [2024-12-06 17:28:24.911185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:33.098 [2024-12-06 17:28:24.911429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.098 [2024-12-06 17:28:24.911440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:33.098 [2024-12-06 17:28:24.911674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.098 [2024-12-06 17:28:24.911685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:33.098 [2024-12-06 17:28:24.911936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.098 [2024-12-06 17:28:24.911947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:33.098 passed 00:12:33.098 Test: blockdev nvme admin passthru ...passed 00:12:33.098 Test: blockdev copy ...passed 00:12:33.098 00:12:33.098 Run Summary: Type Total Ran Passed Failed Inactive 00:12:33.098 suites 1 1 n/a 0 0 00:12:33.098 tests 23 23 23 0 0 00:12:33.098 asserts 152 152 152 0 n/a 00:12:33.098 00:12:33.098 Elapsed time = 1.183 seconds 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.098 rmmod nvme_tcp 00:12:33.098 rmmod nvme_fabrics 00:12:33.098 rmmod nvme_keyring 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
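All 23 bdevio tests and 152 asserts pass; the COMPARE FAILURE and ABORTED - FAILED FUSED notices above are the expected output of the fused compare-and-write cases, where a deliberately mismatching compare must abort its paired write. Teardown then deletes the subsystem and unloads the host-side modules; the retry loop visible in the xtrace is roughly the following (the backoff between attempts is an assumption, the log only shows the loop and the modprobe calls):

    # Unload nvme-tcp, retrying while references drain; nvme-fabrics
    # and nvme_keyring come out once nothing depends on them.
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1   # assumed backoff, not shown in the log
    done
    modprobe -v -r nvme-fabrics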
00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1581979 ']' 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1581979 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1581979 ']' 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1581979 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.098 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1581979 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1581979' 00:12:33.358 killing process with pid 1581979 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1581979 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1581979 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.358 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.902 00:12:35.902 real 0m12.283s 00:12:35.902 user 0m13.855s 00:12:35.902 sys 0m6.235s 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.902 ************************************ 00:12:35.902 END TEST nvmf_bdevio 00:12:35.902 ************************************ 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:35.902 00:12:35.902 real 5m5.187s 00:12:35.902 user 11m56.213s 00:12:35.902 sys 1m51.944s 
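The firewall restore above is the counterpart of the tagged insert during setup: every rule carrying the SPDK_NVMF comment is filtered out of a full dump and the remainder reloaded, which removes exactly the rules this run added. The pattern, verbatim from the iptr helper:

    iptables-save | grep -v SPDK_NVMF | iptables-restore

The namespace and its addresses are then torn down (_remove_spdk_ns, ip -4 addr flush), returning both ports to the root namespace for the next suite.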
00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:35.902 ************************************ 00:12:35.902 END TEST nvmf_target_core 00:12:35.902 ************************************ 00:12:35.902 17:28:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:35.902 17:28:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.902 17:28:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.902 17:28:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:35.902 ************************************ 00:12:35.902 START TEST nvmf_target_extra 00:12:35.902 ************************************ 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:35.902 * Looking for test storage... 00:12:35.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.902 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:35.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.903 --rc genhtml_branch_coverage=1 00:12:35.903 --rc genhtml_function_coverage=1 00:12:35.903 --rc genhtml_legend=1 00:12:35.903 --rc geninfo_all_blocks=1 00:12:35.903 --rc geninfo_unexecuted_blocks=1 00:12:35.903 00:12:35.903 ' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:35.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.903 --rc genhtml_branch_coverage=1 00:12:35.903 --rc genhtml_function_coverage=1 00:12:35.903 --rc genhtml_legend=1 00:12:35.903 --rc geninfo_all_blocks=1 00:12:35.903 --rc geninfo_unexecuted_blocks=1 00:12:35.903 00:12:35.903 ' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:35.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.903 --rc genhtml_branch_coverage=1 00:12:35.903 --rc genhtml_function_coverage=1 00:12:35.903 --rc genhtml_legend=1 00:12:35.903 --rc geninfo_all_blocks=1 00:12:35.903 --rc geninfo_unexecuted_blocks=1 00:12:35.903 00:12:35.903 ' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:35.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.903 --rc genhtml_branch_coverage=1 00:12:35.903 --rc genhtml_function_coverage=1 00:12:35.903 --rc genhtml_legend=1 00:12:35.903 --rc geninfo_all_blocks=1 00:12:35.903 --rc geninfo_unexecuted_blocks=1 00:12:35.903 00:12:35.903 ' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
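The lt 1.15 2 check above gates which spelling of the coverage rc options the installed lcov accepts; cmp_versions just splits both version strings on .-: and compares field by field. The same test can be expressed with GNU sort -V (a behavioral equivalent for illustration, not the script's implementation):

    # "Is version $1 strictly older than version $2?"
    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x'   # prints: lcov predates 2.x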
00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.903 ************************************ 00:12:35.903 START TEST nvmf_example 00:12:35.903 ************************************ 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:35.903 * Looking for test storage... 
00:12:35.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:12:35.903 17:28:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.165 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:36.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.166 --rc genhtml_branch_coverage=1 00:12:36.166 --rc genhtml_function_coverage=1 00:12:36.166 --rc genhtml_legend=1 00:12:36.166 --rc geninfo_all_blocks=1 00:12:36.166 --rc geninfo_unexecuted_blocks=1 00:12:36.166 00:12:36.166 ' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:36.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.166 --rc genhtml_branch_coverage=1 00:12:36.166 --rc genhtml_function_coverage=1 00:12:36.166 --rc genhtml_legend=1 00:12:36.166 --rc geninfo_all_blocks=1 00:12:36.166 --rc geninfo_unexecuted_blocks=1 00:12:36.166 00:12:36.166 ' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:36.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.166 --rc genhtml_branch_coverage=1 00:12:36.166 --rc genhtml_function_coverage=1 00:12:36.166 --rc genhtml_legend=1 00:12:36.166 --rc geninfo_all_blocks=1 00:12:36.166 --rc geninfo_unexecuted_blocks=1 00:12:36.166 00:12:36.166 ' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:36.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.166 --rc genhtml_branch_coverage=1 00:12:36.166 --rc genhtml_function_coverage=1 00:12:36.166 --rc genhtml_legend=1 00:12:36.166 --rc geninfo_all_blocks=1 00:12:36.166 --rc geninfo_unexecuted_blocks=1 00:12:36.166 00:12:36.166 ' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:36.166 17:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:36.166 17:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.166 17:28:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:44.309 17:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:44.309 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:44.309 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:44.309 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:44.309 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.309 17:28:35 
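
The pass above is gather_supported_nvmf_pci_devs at work: the e810/x722/mlx arrays are filled from a PCI bus cache keyed by vendor:device, the two Intel E810 ports (0x8086:0x159b) are kept, and each function is resolved to its kernel net device through sysfs. A minimal standalone sketch of the same lookup, assuming lspci is installed and the usual sysfs layout (the vendor and device IDs come from this log):

  # Sketch: locate E810 ports (0x8086:0x159b) and their net devices, as traced above
  for pci in $(lspci -Dnmm | awk '$3 ~ /8086/ && $4 ~ /159b/ {print $1}'); do
      echo "Found $pci (0x8086 - 0x159b)"
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done
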
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:44.309 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
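
The nvmf_tcp_init block above reads as one small recipe: the first E810 port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and a ping in each direction (whose replies follow) proves the link. A condensed sketch of those steps, run as root, with every name and address taken from this log:

  # Condensed sketch of the namespace topology built by nvmf_tcp_init above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
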
00:12:44.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms
00:12:44.310
00:12:44.310 --- 10.0.0.2 ping statistics ---
00:12:44.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:44.310 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:44.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:44.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:12:44.310
00:12:44.310 --- 10.0.0.1 ping statistics ---
00:12:44.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:44.310 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1586785
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1586785
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1586785 ']'
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:44.310 17:28:35
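
With connectivity proven, nvmfexamplestart launches build/examples/nvmf inside the target namespace, pinned to cores 0-3 (-m 0xF) with shared-memory id 0 (-i 0; the -g 10000 argument was added by build_nvmf_example_args earlier in this trace), and waitforlisten polls until the RPC socket at /var/tmp/spdk.sock is up, with max_retries=100. A standalone sketch of that launch-and-wait pattern; the polling loop is a simplification of the real waitforlisten:

  # Sketch: start the example target in the namespace, then wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF &
  nvmfpid=$!                                   # pid of the target, used later by killprocess
  for retry in $(seq 1 100); do                # simplified stand-in for waitforlisten
      [ -S /var/tmp/spdk.sock ] && break       # socket file appears once the app is listening
      sleep 0.1
  done
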
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.310 17:28:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.570 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.570 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:44.570 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:44.570 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.570 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.570 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.570 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.570 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.570 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.571 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:44.571 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.571 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.571 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.571 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:44.571 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:44.571 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.571 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.571 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:12:44.831 17:28:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:54.835 Initializing NVMe Controllers
00:12:54.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:54.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:54.835 Initialization complete. Launching workers.
00:12:54.835 ========================================================
00:12:54.835 Latency(us)
00:12:54.835 Device Information : IOPS MiB/s Average min max
00:12:54.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18671.09 72.93 3427.26 614.69 16348.20
00:12:54.835 ========================================================
00:12:54.835 Total : 18671.09 72.93 3427.26 614.69 16348.20
00:12:54.835
00:12:54.835 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:12:54.835 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:12:54.835 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:54.835 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:12:54.835 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:54.835 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:12:54.835 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:54.835 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:55.097 rmmod nvme_tcp
00:12:55.097 rmmod nvme_fabrics
00:12:55.097 rmmod nvme_keyring
00:12:55.097 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:55.097 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:12:55.097 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:12:55.097 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1586785 ']'
00:12:55.097 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1586785
00:12:55.097 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1586785 ']'
00:12:55.097 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1586785
00:12:55.097 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:12:55.097 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:55.097 17:28:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1586785
00:12:55.097 17:28:47
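
The whole measurement above is reproducible by hand: the rpc_cmd calls in the trace are effectively scripts/rpc.py invocations against /var/tmp/spdk.sock, and every value below is taken from this log. A sketch, run from the SPDK repo root:

  # Sketch: provision the target the same way the rpc_cmd trace above does ...
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512            # Malloc0: 64 MiB of 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ... then measure: queue depth 64, 4 KiB random I/O, 30% reads, 10 seconds
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The run above sustained 18671 IOPS (about 73 MiB/s) at a 3.4 ms mean latency before nvmftestfini began unloading the kernel modules.
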
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1586785' 00:12:55.097 killing process with pid 1586785 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1586785 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1586785 00:12:55.097 nvmf threads initialize successfully 00:12:55.097 bdev subsystem init successfully 00:12:55.097 created a nvmf target service 00:12:55.097 create targets's poll groups done 00:12:55.097 all subsystems of target started 00:12:55.097 nvmf target is running 00:12:55.097 all subsystems of target stopped 00:12:55.097 destroy targets's poll groups done 00:12:55.097 destroyed the nvmf target service 00:12:55.097 bdev subsystem finish successfully 00:12:55.097 nvmf threads destroy successfully 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.097 17:28:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:57.647 00:12:57.647 real 0m21.444s 00:12:57.647 user 0m46.088s 00:12:57.647 sys 0m7.295s 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:57.647 ************************************ 00:12:57.647 END TEST nvmf_example 00:12:57.647 ************************************ 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.647 ************************************ 00:12:57.647 START TEST nvmf_filesystem 00:12:57.647 ************************************ 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:57.647 * Looking for test storage... 00:12:57.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:57.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.647 --rc genhtml_branch_coverage=1 00:12:57.647 --rc genhtml_function_coverage=1 00:12:57.647 --rc genhtml_legend=1 00:12:57.647 --rc geninfo_all_blocks=1 00:12:57.647 --rc geninfo_unexecuted_blocks=1 00:12:57.647 00:12:57.647 ' 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:57.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.647 --rc genhtml_branch_coverage=1 00:12:57.647 --rc genhtml_function_coverage=1 00:12:57.647 --rc genhtml_legend=1 00:12:57.647 --rc geninfo_all_blocks=1 00:12:57.647 --rc geninfo_unexecuted_blocks=1 00:12:57.647 00:12:57.647 ' 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:57.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.647 --rc genhtml_branch_coverage=1 00:12:57.647 --rc genhtml_function_coverage=1 00:12:57.647 --rc genhtml_legend=1 00:12:57.647 --rc geninfo_all_blocks=1 00:12:57.647 --rc geninfo_unexecuted_blocks=1 00:12:57.647 00:12:57.647 ' 00:12:57.647 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:57.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.647 --rc genhtml_branch_coverage=1 00:12:57.647 --rc genhtml_function_coverage=1 00:12:57.647 --rc genhtml_legend=1 00:12:57.648 --rc geninfo_all_blocks=1 00:12:57.648 --rc geninfo_unexecuted_blocks=1 00:12:57.648 00:12:57.648 ' 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:57.648 17:28:49 
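
The walk through scripts/common.sh above is a field-by-field dotted-version comparison: it decides whether the installed lcov predates version 2, so that the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings are used. A simplified sketch of the same decision (the real cmp_versions supports more operators; sort -V from GNU coreutils is assumed):

  # Simplified sketch of the lcov version test traced above
  version_lt() {   # succeeds when $1 sorts strictly before $2 as a version
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. 1.15, as in this trace
  version_lt "$lcov_ver" 2 && echo "lcov $lcov_ver predates 2: using legacy --rc options"
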
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:57.648 
17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:57.648 #define SPDK_CONFIG_H 00:12:57.648 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:57.648 #define SPDK_CONFIG_APPS 1 00:12:57.648 #define SPDK_CONFIG_ARCH native 00:12:57.648 #undef SPDK_CONFIG_ASAN 00:12:57.648 #undef SPDK_CONFIG_AVAHI 00:12:57.648 #undef SPDK_CONFIG_CET 00:12:57.648 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:57.648 #define SPDK_CONFIG_COVERAGE 1 00:12:57.648 #define SPDK_CONFIG_CROSS_PREFIX 00:12:57.648 #undef SPDK_CONFIG_CRYPTO 00:12:57.648 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:57.648 #undef SPDK_CONFIG_CUSTOMOCF 00:12:57.648 #undef SPDK_CONFIG_DAOS 00:12:57.648 #define SPDK_CONFIG_DAOS_DIR 00:12:57.648 #define SPDK_CONFIG_DEBUG 1 00:12:57.648 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:57.648 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:57.648 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:57.648 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:57.648 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:57.648 #undef SPDK_CONFIG_DPDK_UADK 00:12:57.648 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:57.648 #define SPDK_CONFIG_EXAMPLES 1 00:12:57.648 #undef SPDK_CONFIG_FC 00:12:57.648 #define SPDK_CONFIG_FC_PATH 00:12:57.648 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:57.648 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:57.648 #define SPDK_CONFIG_FSDEV 1 00:12:57.648 #undef SPDK_CONFIG_FUSE 00:12:57.648 #undef SPDK_CONFIG_FUZZER 00:12:57.648 #define SPDK_CONFIG_FUZZER_LIB 00:12:57.648 #undef SPDK_CONFIG_GOLANG 00:12:57.648 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:57.648 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:57.648 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:57.648 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:57.648 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:57.648 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:57.648 #undef SPDK_CONFIG_HAVE_LZ4 00:12:57.648 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:57.648 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:57.648 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:57.648 #define SPDK_CONFIG_IDXD 1 00:12:57.648 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:57.648 #undef SPDK_CONFIG_IPSEC_MB 00:12:57.648 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:57.648 #define SPDK_CONFIG_ISAL 1 00:12:57.648 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:57.648 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:57.648 #define SPDK_CONFIG_LIBDIR 00:12:57.648 #undef SPDK_CONFIG_LTO 00:12:57.648 #define SPDK_CONFIG_MAX_LCORES 128 00:12:57.648 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:57.648 #define SPDK_CONFIG_NVME_CUSE 1 00:12:57.648 #undef SPDK_CONFIG_OCF 00:12:57.648 #define SPDK_CONFIG_OCF_PATH 00:12:57.648 #define SPDK_CONFIG_OPENSSL_PATH 00:12:57.648 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:57.648 #define SPDK_CONFIG_PGO_DIR 00:12:57.648 #undef SPDK_CONFIG_PGO_USE 00:12:57.648 #define SPDK_CONFIG_PREFIX /usr/local 00:12:57.648 #undef SPDK_CONFIG_RAID5F 00:12:57.648 #undef SPDK_CONFIG_RBD 00:12:57.648 #define SPDK_CONFIG_RDMA 1 00:12:57.648 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:57.648 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:57.648 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:57.648 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:57.648 #define SPDK_CONFIG_SHARED 1 00:12:57.648 #undef SPDK_CONFIG_SMA 00:12:57.648 #define SPDK_CONFIG_TESTS 1 00:12:57.648 #undef SPDK_CONFIG_TSAN 
00:12:57.648 #define SPDK_CONFIG_UBLK 1 00:12:57.648 #define SPDK_CONFIG_UBSAN 1 00:12:57.648 #undef SPDK_CONFIG_UNIT_TESTS 00:12:57.648 #undef SPDK_CONFIG_URING 00:12:57.648 #define SPDK_CONFIG_URING_PATH 00:12:57.648 #undef SPDK_CONFIG_URING_ZNS 00:12:57.648 #undef SPDK_CONFIG_USDT 00:12:57.648 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:57.648 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:57.648 #define SPDK_CONFIG_VFIO_USER 1 00:12:57.648 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:57.648 #define SPDK_CONFIG_VHOST 1 00:12:57.648 #define SPDK_CONFIG_VIRTIO 1 00:12:57.648 #undef SPDK_CONFIG_VTUNE 00:12:57.648 #define SPDK_CONFIG_VTUNE_DIR 00:12:57.648 #define SPDK_CONFIG_WERROR 1 00:12:57.648 #define SPDK_CONFIG_WPDK_DIR 00:12:57.648 #undef SPDK_CONFIG_XNVME 00:12:57.648 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
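
The config.h pattern match above is how the harness confirms it is driving a debug build: include/spdk/config.h is the generated mirror of the CONFIG_* values listed earlier, so a test can branch on build features with a plain grep, or by sourcing build_config.sh. A sketch using values visible in this trace:

  # Sketch: two equivalent ways the build configuration can be queried
  grep -q '#define SPDK_CONFIG_DEBUG' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h && echo "debug build"
  source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
  [ "$CONFIG_UBSAN" = y ] && echo "UBSAN-instrumented build"   # matches SPDK_RUN_UBSAN=1 in this job
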
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:57.648 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:57.649 17:28:49 
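
The PATH being exported above has visibly accumulated the same /opt/golangci, /opt/protoc and /opt/go prefixes many times over, a side effect of paths/export.sh being re-sourced by each nested test script. It is harmless, but an order-preserving deduplication is a one-liner if it ever mattered (POSIX awk assumed):

  # Sketch: order-preserving dedup for the repeated PATH entries visible above
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:*$//')
  export PATH
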
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:57.649 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
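Note that the exported LD_LIBRARY_PATH above repeats the same three directories (spdk/build/lib, dpdk/build/lib, libvfio-user/usr/local/lib) five times, and PYTHONPATH repeats its two entries five times: the prepend runs again each time autotest_common.sh is sourced by a nested test script. That is harmless for lookup (first match wins) but shows the prepend is not idempotent. A hedged sketch of an idempotent alternative; the prepend_path helper is hypothetical, not part of SPDK:

    # Hypothetical guard: prepend a directory to a :-separated path variable
    # only if it is not already present, so repeated sourcing stays idempotent.
    prepend_path() {
        local var=$1 dir=$2
        case ":${!var}:" in
            *":$dir:"*) ;;  # already present, do nothing
            *) printf -v "$var" '%s' "$dir${!var:+:${!var}}" ;;
        esac
        export "$var"
    }

    prepend_path LD_LIBRARY_PATH "$SPDK_LIB_DIR"
    prepend_path LD_LIBRARY_PATH "$DPDK_LIB_DIR"
    prepend_path LD_LIBRARY_PATH "$VFIO_LIB_DIR"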
00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:57.649 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
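The sanitizer setup traced here rebuilds the leak-sanitizer suppression file from scratch on every run and wires it in through LSAN_OPTIONS. A condensed sketch reconstructed from the trace (option strings and the libfuse3 suppression are verbatim from the log; the redirection used to populate the file is an assumption):

    # Rebuild the LSAN suppression file each run; libfuse3 leaks are
    # suppressed deliberately.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"

    # Fail fast on sanitizer findings instead of limping on.
    export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"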
00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1589575 ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1589575 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
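The set_test_storage 2147483648 call traced below asks for roughly 2 GiB of scratch space: it parses 'df -T' output into per-mount size/avail arrays, walks the candidate directories (the test dir, a /tmp/spdk.XXXXXX fallback), and accepts the first one whose free space covers the request, here the root overlay with ~123 GB free. A simplified, hedged sketch of that selection loop (the pick_test_storage name is hypothetical; the real helper also records filesystem types and creates the fallback directory):

    # Simplified sketch: return the first candidate directory whose
    # filesystem has at least the requested free space, per df.
    pick_test_storage() {
        local requested=$1; shift
        local dir avail
        for dir in "$@"; do
            # df -P prints 1K blocks; column 4 of line 2 is "Available".
            avail=$(( $(df -P "$dir" | awk 'NR==2 {print $4}') * 1024 ))
            if (( avail >= requested )); then
                echo "$dir"
                return 0
            fi
        done
        return 1
    }

    # e.g. pick_test_storage $((2 * 1024 * 1024 * 1024)) "$testdir" /tmp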
00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.tsxsNg 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.tsxsNg/tests/target /tmp/spdk.tsxsNg 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:57.650 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=123391897600 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356521472 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5964623872 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668229632 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678260736 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847951360 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23355392 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:57.650 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64678035456 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678260736 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=225280 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:57.650 * Looking for test storage... 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=123391897600 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8179216384 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:57.650 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.913 --rc genhtml_branch_coverage=1 00:12:57.913 --rc genhtml_function_coverage=1 00:12:57.913 --rc genhtml_legend=1 00:12:57.913 --rc geninfo_all_blocks=1 00:12:57.913 --rc geninfo_unexecuted_blocks=1 00:12:57.913 00:12:57.913 ' 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.913 --rc genhtml_branch_coverage=1 00:12:57.913 --rc genhtml_function_coverage=1 00:12:57.913 --rc genhtml_legend=1 00:12:57.913 --rc geninfo_all_blocks=1 00:12:57.913 --rc geninfo_unexecuted_blocks=1 00:12:57.913 00:12:57.913 ' 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.913 --rc genhtml_branch_coverage=1 00:12:57.913 --rc genhtml_function_coverage=1 00:12:57.913 --rc genhtml_legend=1 00:12:57.913 --rc geninfo_all_blocks=1 00:12:57.913 --rc geninfo_unexecuted_blocks=1 00:12:57.913 00:12:57.913 ' 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.913 --rc genhtml_branch_coverage=1 00:12:57.913 --rc genhtml_function_coverage=1 00:12:57.913 --rc genhtml_legend=1 00:12:57.913 --rc geninfo_all_blocks=1 00:12:57.913 --rc geninfo_unexecuted_blocks=1 00:12:57.913 00:12:57.913 ' 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.913 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.914 17:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:57.914 17:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:06.061 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:06.062 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:06.062 17:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:06.062 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.062 17:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:06.062 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:06.062 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:06.062 17:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:06.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:13:06.062 00:13:06.062 --- 10.0.0.2 ping statistics --- 00:13:06.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.062 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:13:06.062 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:06.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:13:06.062 00:13:06.062 --- 10.0.0.1 ping statistics --- 00:13:06.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.062 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:06.063 ************************************ 00:13:06.063 START TEST nvmf_filesystem_no_in_capsule 00:13:06.063 ************************************ 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1593213 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1593213 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1593213 ']' 00:13:06.063 
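The nvmf_tcp_init sequence above builds the test topology on a single host: the target port cvl_0_0 is moved into a private network namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule opens TCP/4420, and a ping in each direction proves the path before any NVMe traffic flows. Condensed, with the names taken from this run:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator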
17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.063 17:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.063 [2024-12-06 17:28:57.454528] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:13:06.063 [2024-12-06 17:28:57.454590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.063 [2024-12-06 17:28:57.552157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.063 [2024-12-06 17:28:57.605020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.063 [2024-12-06 17:28:57.605074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.063 [2024-12-06 17:28:57.605086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.063 [2024-12-06 17:28:57.605100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.063 [2024-12-06 17:28:57.605108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
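nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the app's RPC socket answers, which is why the "Waiting for process to start up..." line precedes the EAL and reactor notices. A sketch of that readiness loop; probing via rpc.py rpc_get_methods is my stand-in for whatever check the real helper performs:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # succeeds once the RPC server is listening on /var/tmp/spdk.sock
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" || exit 1   # bail if the target already died
        sleep 0.1
    done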
00:13:06.063 [2024-12-06 17:28:57.607504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.063 [2024-12-06 17:28:57.607683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.063 [2024-12-06 17:28:57.607858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.063 [2024-12-06 17:28:57.607859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.326 [2024-12-06 17:28:58.329661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.326 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.588 Malloc1 00:13:06.588 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.588 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:06.588 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.589 17:28:58 
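With all four reactors up, the test provisions the target over RPC: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1; the namespace and listener calls follow just below. rpc_cmd in the suite is roughly equivalent to invoking rpc.py directly:

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
    $RPC bdev_malloc_create 512 512 -b Malloc1      # 512 MiB total, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420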
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.589 [2024-12-06 17:28:58.487828] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:06.589 { 00:13:06.589 "name": "Malloc1", 00:13:06.589 "aliases": [ 00:13:06.589 "5c017325-013b-41ad-9828-ff764ba55a3f" 00:13:06.589 ], 00:13:06.589 "product_name": "Malloc disk", 00:13:06.589 "block_size": 512, 00:13:06.589 "num_blocks": 1048576, 00:13:06.589 "uuid": "5c017325-013b-41ad-9828-ff764ba55a3f", 00:13:06.589 "assigned_rate_limits": { 00:13:06.589 "rw_ios_per_sec": 0, 00:13:06.589 "rw_mbytes_per_sec": 0, 00:13:06.589 "r_mbytes_per_sec": 0, 00:13:06.589 "w_mbytes_per_sec": 0 00:13:06.589 }, 00:13:06.589 "claimed": true, 00:13:06.589 "claim_type": "exclusive_write", 00:13:06.589 "zoned": false, 00:13:06.589 "supported_io_types": { 00:13:06.589 "read": 
true, 00:13:06.589 "write": true, 00:13:06.589 "unmap": true, 00:13:06.589 "flush": true, 00:13:06.589 "reset": true, 00:13:06.589 "nvme_admin": false, 00:13:06.589 "nvme_io": false, 00:13:06.589 "nvme_io_md": false, 00:13:06.589 "write_zeroes": true, 00:13:06.589 "zcopy": true, 00:13:06.589 "get_zone_info": false, 00:13:06.589 "zone_management": false, 00:13:06.589 "zone_append": false, 00:13:06.589 "compare": false, 00:13:06.589 "compare_and_write": false, 00:13:06.589 "abort": true, 00:13:06.589 "seek_hole": false, 00:13:06.589 "seek_data": false, 00:13:06.589 "copy": true, 00:13:06.589 "nvme_iov_md": false 00:13:06.589 }, 00:13:06.589 "memory_domains": [ 00:13:06.589 { 00:13:06.589 "dma_device_id": "system", 00:13:06.589 "dma_device_type": 1 00:13:06.589 }, 00:13:06.589 { 00:13:06.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.589 "dma_device_type": 2 00:13:06.589 } 00:13:06.589 ], 00:13:06.589 "driver_specific": {} 00:13:06.589 } 00:13:06.589 ]' 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:06.589 17:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.539 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.539 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:08.539 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.539 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:08.539 17:29:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:10.455 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:10.715 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:10.975 17:29:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:11.917 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:11.917 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:11.917 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:11.917 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.917 17:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.200 ************************************ 00:13:12.200 START TEST filesystem_ext4 00:13:12.200 ************************************ 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
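Before the per-filesystem subtests, the initiator attached to the subsystem with nvme-cli, polled lsblk until the namespace surfaced by serial number, verified the device size against the malloc bdev, and laid down a GPT with a single partition. Condensed from the commands above (the until-loop is a simplified stand-in for waitforserial):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe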
00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:12.200 mke2fs 1.47.0 (5-Feb-2023) 00:13:12.200 Discarding device blocks: 0/522240 done 00:13:12.200 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:12.200 Filesystem UUID: d3327700-6bf7-4d03-91bd-1e2e468c467a 00:13:12.200 Superblock backups stored on blocks: 00:13:12.200 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:12.200 00:13:12.200 Allocating group tables: 0/64 done 00:13:12.200 Writing inode tables: 0/64 done 00:13:12.200 Creating journal (8192 blocks): done 00:13:12.200 Writing superblocks and filesystem accounting information: 0/64 done 00:13:12.200 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:12.200 17:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:18.921 
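Every filesystem_* subtest follows the same shape: mkfs on the partition, mount, create and remove a file with syncs in between, unmount, then (just below) confirm the target process is still alive and that lsblk still sees both the namespace and the partition. The ext4 pass, reduced to its essentials:

    mkfs.ext4 -F /dev/nvme0n1p1        # -F: force, chosen by make_filesystem
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                 # nvmf_tgt (pid 1593213 here) must survive
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1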
17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1593213 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:18.921 00:13:18.921 real 0m5.826s 00:13:18.921 user 0m0.032s 00:13:18.921 sys 0m0.072s 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:18.921 ************************************ 00:13:18.921 END TEST filesystem_ext4 00:13:18.921 ************************************ 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:18.921 ************************************ 00:13:18.921 START TEST filesystem_btrfs 00:13:18.921 ************************************ 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:18.921 17:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:18.921 17:29:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:18.921 btrfs-progs v6.8.1 00:13:18.921 See https://btrfs.readthedocs.io for more information. 00:13:18.921 00:13:18.921 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:18.921 NOTE: several default settings have changed in version 5.15, please make sure 00:13:18.921 this does not affect your deployments: 00:13:18.921 - DUP for metadata (-m dup) 00:13:18.921 - enabled no-holes (-O no-holes) 00:13:18.921 - enabled free-space-tree (-R free-space-tree) 00:13:18.921 00:13:18.921 Label: (null) 00:13:18.921 UUID: d5dbc4fc-6733-4615-9092-2d99533610f2 00:13:18.921 Node size: 16384 00:13:18.921 Sector size: 4096 (CPU page size: 4096) 00:13:18.921 Filesystem size: 510.00MiB 00:13:18.921 Block group profiles: 00:13:18.921 Data: single 8.00MiB 00:13:18.921 Metadata: DUP 32.00MiB 00:13:18.921 System: DUP 8.00MiB 00:13:18.921 SSD detected: yes 00:13:18.921 Zoned device: no 00:13:18.921 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:18.921 Checksum: crc32c 00:13:18.921 Number of devices: 1 00:13:18.921 Devices: 00:13:18.921 ID SIZE PATH 00:13:18.921 1 510.00MiB /dev/nvme0n1p1 00:13:18.921 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1593213 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:18.921 
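make_filesystem picks the right force flag per filesystem, visible in the '[' btrfs = ext4 ']' branches above: mkfs.ext4 spells force as -F, while mkfs.btrfs and mkfs.xfs take -f; the helper also keeps a counter i for retries. A sketch of that dispatch; the retry limit and sleep are illustrative, not copied from autotest_common.sh:

    make_filesystem() {
        local fstype=$1 dev_name=$2 i=0 force
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f    # btrfs and xfs use lowercase -f
        fi
        # a freshly probed partition can be transiently busy, so retry
        until mkfs."$fstype" $force "$dev_name"; do
            ((++i > 5)) && return 1
            sleep 1
        done
    }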
17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:18.921 00:13:18.921 real 0m1.057s 00:13:18.921 user 0m0.030s 00:13:18.921 sys 0m0.115s 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.921 17:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:18.921 ************************************ 00:13:18.921 END TEST filesystem_btrfs 00:13:18.921 ************************************ 00:13:19.179 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:19.180 ************************************ 00:13:19.180 START TEST filesystem_xfs 00:13:19.180 ************************************ 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:19.180 17:29:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:19.180 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:19.180 = sectsz=512 attr=2, projid32bit=1 00:13:19.180 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:19.180 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:19.180 data 
= bsize=4096 blocks=130560, imaxpct=25 00:13:19.180 = sunit=0 swidth=0 blks 00:13:19.180 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:19.180 log =internal log bsize=4096 blocks=16384, version=2 00:13:19.180 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:19.180 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:20.119 Discarding blocks...Done. 00:13:20.119 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:20.119 17:29:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:22.027 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:22.027 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:22.027 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:22.027 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:22.028 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:22.028 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:22.028 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1593213 00:13:22.028 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:22.028 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:22.028 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:22.028 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:22.028 00:13:22.028 real 0m2.880s 00:13:22.028 user 0m0.023s 00:13:22.028 sys 0m0.082s 00:13:22.028 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.028 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 ************************************ 00:13:22.028 END TEST filesystem_xfs 00:13:22.028 ************************************ 00:13:22.028 17:29:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:22.288 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:22.288 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.548 17:29:14 
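After the xfs pass the suite tears down in reverse: drop the test partition (under flock so concurrent parted invocations cannot race on the device), disconnect the initiator, wait for the serial to vanish from lsblk, delete the subsystem over RPC, and kill the target, as the trace below shows. Condensed:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1    # waitforserial_disconnect, simplified
    done
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"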
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1593213 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1593213 ']' 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1593213 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1593213 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1593213' 00:13:22.548 killing process with pid 1593213 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1593213 00:13:22.548 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1593213 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:22.809 00:13:22.809 real 0m17.312s 00:13:22.809 user 1m8.326s 00:13:22.809 sys 0m1.438s 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.809 ************************************ 00:13:22.809 END TEST nvmf_filesystem_no_in_capsule 00:13:22.809 ************************************ 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:22.809 ************************************ 00:13:22.809 START TEST nvmf_filesystem_in_capsule 00:13:22.809 ************************************ 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1596813 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1596813 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1596813 ']' 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
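The second half of the suite reruns the identical filesystem tests with in-capsule data enabled: nvmf_filesystem_part receives 4096 instead of 0, so the transport below is created with -c 4096, allowing up to 4 KiB of write data to ride inside the NVMe/TCP command capsule rather than being fetched in a separate transfer. The only difference between the two runs is this one flag:

    # first run (no in-capsule data):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # second run (commands may carry up to 4096 B of data inline):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096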
00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.809 17:29:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.809 [2024-12-06 17:29:14.839084] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:13:22.809 [2024-12-06 17:29:14.839132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.070 [2024-12-06 17:29:14.928001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.070 [2024-12-06 17:29:14.960463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.070 [2024-12-06 17:29:14.960495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.070 [2024-12-06 17:29:14.960501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.070 [2024-12-06 17:29:14.960506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.070 [2024-12-06 17:29:14.960510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.070 [2024-12-06 17:29:14.961756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.070 [2024-12-06 17:29:14.961909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.070 [2024-12-06 17:29:14.962060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.070 [2024-12-06 17:29:14.962062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.639 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.640 [2024-12-06 17:29:15.689600] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.640 17:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.640 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.899 Malloc1 00:13:23.899 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.899 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:23.899 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.899 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.899 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.899 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.899 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.899 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.899 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.900 [2024-12-06 17:29:15.830284] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:23.900 17:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:23.900 { 00:13:23.900 "name": "Malloc1", 00:13:23.900 "aliases": [ 00:13:23.900 "8ba56783-0cd0-475e-9e8b-19e239c0362a" 00:13:23.900 ], 00:13:23.900 "product_name": "Malloc disk", 00:13:23.900 "block_size": 512, 00:13:23.900 "num_blocks": 1048576, 00:13:23.900 "uuid": "8ba56783-0cd0-475e-9e8b-19e239c0362a", 00:13:23.900 "assigned_rate_limits": { 00:13:23.900 "rw_ios_per_sec": 0, 00:13:23.900 "rw_mbytes_per_sec": 0, 00:13:23.900 "r_mbytes_per_sec": 0, 00:13:23.900 "w_mbytes_per_sec": 0 00:13:23.900 }, 00:13:23.900 "claimed": true, 00:13:23.900 "claim_type": "exclusive_write", 00:13:23.900 "zoned": false, 00:13:23.900 "supported_io_types": { 00:13:23.900 "read": true, 00:13:23.900 "write": true, 00:13:23.900 "unmap": true, 00:13:23.900 "flush": true, 00:13:23.900 "reset": true, 00:13:23.900 "nvme_admin": false, 00:13:23.900 "nvme_io": false, 00:13:23.900 "nvme_io_md": false, 00:13:23.900 "write_zeroes": true, 00:13:23.900 "zcopy": true, 00:13:23.900 "get_zone_info": false, 00:13:23.900 "zone_management": false, 00:13:23.900 "zone_append": false, 00:13:23.900 "compare": false, 00:13:23.900 "compare_and_write": false, 00:13:23.900 "abort": true, 00:13:23.900 "seek_hole": false, 00:13:23.900 "seek_data": false, 00:13:23.900 "copy": true, 00:13:23.900 "nvme_iov_md": false 00:13:23.900 }, 00:13:23.900 "memory_domains": [ 00:13:23.900 { 00:13:23.900 "dma_device_id": "system", 00:13:23.900 "dma_device_type": 1 00:13:23.900 }, 00:13:23.900 { 00:13:23.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.900 "dma_device_type": 2 00:13:23.900 } 00:13:23.900 ], 00:13:23.900 "driver_specific": {} 00:13:23.900 } 00:13:23.900 ]' 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:23.900 17:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.814 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:25.814 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:25.814 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.814 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:25.814 17:29:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:27.727 17:29:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:27.727 17:29:19 
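On the initiator side the log shows: connect with nvme-cli, poll lsblk until a namespace with the expected serial appears, resolve the device name, compare its size with the malloc bdev, and lay down a single GPT partition. A hedged sketch of that flow (the sysfs read is an approximation of sec_size_to_bytes; NQN, serial, and addresses are taken from the log):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
        sleep 2                                      # waitforserial retries every 2 s
    done
    dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    echo $(( $(cat /sys/block/"$dev"/size) * 512 ))  # 536870912, matching the 512 MiB bdev
    mkdir -p /mnt/device
    parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%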
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:28.671 17:29:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:29.613 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:29.613 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:29.613 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:29.613 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.613 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:29.613 ************************************ 00:13:29.613 START TEST filesystem_in_capsule_ext4 00:13:29.613 ************************************ 00:13:29.613 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:29.613 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:29.613 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:29.613 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:29.613 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:29.614 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:29.614 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:29.614 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:29.614 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:29.614 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:29.614 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:29.614 mke2fs 1.47.0 (5-Feb-2023) 00:13:29.614 Discarding device blocks: 0/522240 done 00:13:29.614 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:29.614 Filesystem UUID: 5af9c71f-fbc3-42ff-94f1-8294ef329f8f 00:13:29.614 Superblock backups stored on blocks: 00:13:29.614 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:29.614 00:13:29.614 Allocating group tables: 0/64 done 00:13:29.614 Writing inode tables: 
0/64 done 00:13:29.614 Creating journal (8192 blocks): done 00:13:29.874 Writing superblocks and filesystem accounting information: 0/64 done 00:13:29.874 00:13:29.874 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:29.874 17:29:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1596813 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:36.458 00:13:36.458 real 0m6.332s 00:13:36.458 user 0m0.029s 00:13:36.458 sys 0m0.075s 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:36.458 ************************************ 00:13:36.458 END TEST filesystem_in_capsule_ext4 00:13:36.458 ************************************ 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.458 
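Every filesystem_in_capsule_* subtest repeats one smoke test against the partition: make the filesystem, mount it, create and remove a file with syncs in between, unmount, then confirm the target process and the block devices are all still there. A condensed sketch (make_filesystem selects -F for ext4 and -f otherwise, per the @935-@938 entries above):

    mkfs.ext4 -F /dev/nvme0n1p1                # the btrfs/xfs runs use mkfs.<fs> -f instead
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # target (pid 1596813 here) must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # namespace and partition still visible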
************************************ 00:13:36.458 START TEST filesystem_in_capsule_btrfs 00:13:36.458 ************************************ 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:36.458 17:29:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:36.458 btrfs-progs v6.8.1 00:13:36.458 See https://btrfs.readthedocs.io for more information. 00:13:36.458 00:13:36.458 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:36.458 NOTE: several default settings have changed in version 5.15, please make sure 00:13:36.458 this does not affect your deployments: 00:13:36.458 - DUP for metadata (-m dup) 00:13:36.458 - enabled no-holes (-O no-holes) 00:13:36.458 - enabled free-space-tree (-R free-space-tree) 00:13:36.458 00:13:36.458 Label: (null) 00:13:36.458 UUID: dd3be39e-9eb8-4e41-9137-df46161b3946 00:13:36.458 Node size: 16384 00:13:36.458 Sector size: 4096 (CPU page size: 4096) 00:13:36.458 Filesystem size: 510.00MiB 00:13:36.458 Block group profiles: 00:13:36.458 Data: single 8.00MiB 00:13:36.458 Metadata: DUP 32.00MiB 00:13:36.458 System: DUP 8.00MiB 00:13:36.458 SSD detected: yes 00:13:36.458 Zoned device: no 00:13:36.458 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:36.458 Checksum: crc32c 00:13:36.458 Number of devices: 1 00:13:36.458 Devices: 00:13:36.458 ID SIZE PATH 00:13:36.458 1 510.00MiB /dev/nvme0n1p1 00:13:36.458 00:13:36.458 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:36.458 17:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1596813 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:37.399 00:13:37.399 real 0m1.293s 00:13:37.399 user 0m0.022s 00:13:37.399 sys 0m0.125s 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:13:37.399 ************************************ 00:13:37.399 END TEST filesystem_in_capsule_btrfs 00:13:37.399 ************************************ 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.399 ************************************ 00:13:37.399 START TEST filesystem_in_capsule_xfs 00:13:37.399 ************************************ 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:37.399 17:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:37.399 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:37.399 = sectsz=512 attr=2, projid32bit=1 00:13:37.399 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:37.399 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:37.399 data = bsize=4096 blocks=130560, imaxpct=25 00:13:37.399 = sunit=0 swidth=0 blks 00:13:37.399 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:37.399 log =internal log bsize=4096 blocks=16384, version=2 00:13:37.399 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:37.399 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:38.339 Discarding blocks...Done. 
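The three mkfs runs agree on the usable space, which is a handy sanity check on the GPT layout: ext4's 522240 1 KiB blocks and xfs's 130560 4 KiB data blocks both equal the 510.00MiB that mkfs.btrfs reported, i.e. the 512 MiB namespace minus partition-table overhead. Checked with shell arithmetic (the equalities are my cross-check, not log output):

    echo $(( 522240 * 1024 ))      # ext4:  534773760 bytes
    echo $(( 130560 * 4096 ))      # xfs:   534773760 bytes
    echo $(( 510 * 1024 * 1024 ))  # btrfs: 534773760 bytes (510.00MiB)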
00:13:38.339 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:38.339 17:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:40.881 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:40.881 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:40.881 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:40.881 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:40.881 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:40.881 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:40.881 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1596813 00:13:40.881 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:40.881 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:41.140 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:41.141 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:41.141 00:13:41.141 real 0m3.677s 00:13:41.141 user 0m0.030s 00:13:41.141 sys 0m0.075s 00:13:41.141 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.141 17:29:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:41.141 ************************************ 00:13:41.141 END TEST filesystem_in_capsule_xfs 00:13:41.141 ************************************ 00:13:41.141 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:41.141 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:41.400 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.700 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.700 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:13:41.700 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:41.700 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1596813 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1596813 ']' 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1596813 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596813 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596813' 00:13:41.701 killing process with pid 1596813 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1596813 00:13:41.701 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1596813 00:13:41.960 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:41.960 00:13:41.960 real 0m19.135s 00:13:41.960 user 1m15.732s 00:13:41.960 sys 0m1.399s 00:13:41.960 17:29:33 
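Teardown mirrors the setup: drop the test partition under an flock, disconnect the initiator, wait for the serial to disappear from lsblk, delete the subsystem over RPC, then kill the target and reap it. A sketch of the sequence these entries record (the rpc.py path is assumed; the rest is copied from the log):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # serialize against concurrent probes
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"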
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.960 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.960 ************************************ 00:13:41.960 END TEST nvmf_filesystem_in_capsule 00:13:41.960 ************************************ 00:13:41.960 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:41.960 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:41.960 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:41.960 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:41.960 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:41.960 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:41.960 17:29:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:41.960 rmmod nvme_tcp 00:13:41.960 rmmod nvme_fabrics 00:13:41.960 rmmod nvme_keyring 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:41.960 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:41.961 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:41.961 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:41.961 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.222 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.222 17:29:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.131 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:44.131 00:13:44.131 real 0m46.750s 00:13:44.131 user 2m26.452s 00:13:44.131 sys 0m8.731s 00:13:44.131 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.131 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:44.131 
************************************ 00:13:44.131 END TEST nvmf_filesystem 00:13:44.131 ************************************ 00:13:44.131 17:29:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:44.131 17:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:44.131 17:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.131 17:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:44.131 ************************************ 00:13:44.131 START TEST nvmf_target_discovery 00:13:44.131 ************************************ 00:13:44.131 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:44.393 * Looking for test storage... 00:13:44.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:44.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.393 --rc genhtml_branch_coverage=1 00:13:44.393 --rc genhtml_function_coverage=1 00:13:44.393 --rc genhtml_legend=1 00:13:44.393 --rc geninfo_all_blocks=1 00:13:44.393 --rc geninfo_unexecuted_blocks=1 00:13:44.393 00:13:44.393 ' 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:44.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.393 --rc genhtml_branch_coverage=1 00:13:44.393 --rc genhtml_function_coverage=1 00:13:44.393 --rc genhtml_legend=1 00:13:44.393 --rc geninfo_all_blocks=1 00:13:44.393 --rc geninfo_unexecuted_blocks=1 00:13:44.393 00:13:44.393 ' 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:44.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.393 --rc genhtml_branch_coverage=1 00:13:44.393 --rc genhtml_function_coverage=1 00:13:44.393 --rc genhtml_legend=1 00:13:44.393 --rc geninfo_all_blocks=1 00:13:44.393 --rc geninfo_unexecuted_blocks=1 00:13:44.393 00:13:44.393 ' 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:44.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.393 --rc genhtml_branch_coverage=1 00:13:44.393 --rc genhtml_function_coverage=1 00:13:44.393 --rc genhtml_legend=1 00:13:44.393 --rc geninfo_all_blocks=1 00:13:44.393 --rc geninfo_unexecuted_blocks=1 00:13:44.393 00:13:44.393 ' 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:44.393 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:44.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:44.394 17:29:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:52.538 17:29:43 
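nvmftestinit next probes the hardware: gather_supported_nvmf_pci_devs (the entries just below) builds whitelists of NIC device IDs, Intel E810 (0x1592, 0x159b), X722 (0x37d2), and a list of Mellanox parts, then matches them against the PCI bus to pick the test ports. A rough standalone equivalent over sysfs (the harness itself consults a prebuilt pci_bus_cache, so this is only an approximation):

    e810=(0x1592 0x159b); x722=(0x37d2)        # Intel IDs from the whitelist below
    for pci in /sys/bus/pci/devices/*; do
        ven=$(<"$pci/vendor"); dev=$(<"$pci/device")
        if [[ $ven == 0x8086 && " ${e810[*]} ${x722[*]} " == *" $dev "* ]]; then
            echo "Found ${pci##*/} ($ven - $dev)"   # e.g. 0000:4b:00.0 (0x8086 - 0x159b)
        fi
    done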
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:52.538 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:52.539 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:52.539 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:52.539 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:52.539 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.539 17:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:52.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:13:52.539 00:13:52.539 --- 10.0.0.2 ping statistics --- 00:13:52.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.539 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:13:52.539 00:13:52.539 --- 10.0.0.1 ping statistics --- 00:13:52.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.539 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1605066 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1605066 00:13:52.539 17:29:43 
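
Condensing the nvmf_tcp_init trace: the first cvl port is moved into a private network namespace to act as the target, the second stays in the root namespace as the initiator, an iptables rule opens TCP/4420 on the initiator side, and one ping in each direction proves the 10.0.0.0/24 link before any NVMe traffic flows. The same topology as a standalone sequence (commands lifted from the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
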
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1605066 ']' 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.539 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.540 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.540 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.540 17:29:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 [2024-12-06 17:29:44.052493] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:13:52.540 [2024-12-06 17:29:44.052557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.540 [2024-12-06 17:29:44.127830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.540 [2024-12-06 17:29:44.175326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.540 [2024-12-06 17:29:44.175381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.540 [2024-12-06 17:29:44.175388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.540 [2024-12-06 17:29:44.175399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.540 [2024-12-06 17:29:44.175403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
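
The target is launched inside that namespace with the flags from the trace: -i 0 (shared-memory id), -e 0xFFFF (tracepoint group mask, matching the NOTICE above), -m 0xF (core mask, hence the four reactors below). waitforlisten then blocks until /var/tmp/spdk.sock answers; polling rpc_get_methods here is a simplified stand-in for it, not what autotest_common.sh literally does:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  pid=$!
  # Wait for the RPC socket to come up (simplified waitforlisten):
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done
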
00:13:52.540 [2024-12-06 17:29:44.177291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.540 [2024-12-06 17:29:44.177456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.540 [2024-12-06 17:29:44.177621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.540 [2024-12-06 17:29:44.177621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 [2024-12-06 17:29:44.340031] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 Null1 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 [2024-12-06 17:29:44.409928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 Null2 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:13:52.540 Null3 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 Null4 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.540 17:29:44 
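
Every iteration above repeats the same four RPCs, so the provisioning phase collapses to one loop; the trace then exposes the discovery service itself plus a referral to port 4430, which is what yields the six-record discovery log printed just below. Reconstructed as a sketch of what target/discovery.sh is doing (rpc_cmd in the harness is assumed to wrap scripts/rpc.py):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 4); do
      $rpc bdev_null_create Null$i 102400 512            # size 102400 (MB), 512 B blocks
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
          -a -s SPDK0000000000000$i                      # -a: allow any host
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
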
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.540 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.541 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:52.541 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.541 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.541 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.541 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:52.541 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.541 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.541 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.541 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:13:52.804 00:13:52.804 Discovery Log Number of Records 6, Generation counter 6 00:13:52.804 =====Discovery Log Entry 0====== 00:13:52.804 trtype: tcp 00:13:52.804 adrfam: ipv4 00:13:52.804 subtype: current discovery subsystem 00:13:52.804 treq: not required 00:13:52.804 portid: 0 00:13:52.804 trsvcid: 4420 00:13:52.804 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:52.804 traddr: 10.0.0.2 00:13:52.804 eflags: explicit discovery connections, duplicate discovery information 00:13:52.804 sectype: none 00:13:52.804 =====Discovery Log Entry 1====== 00:13:52.804 trtype: tcp 00:13:52.804 adrfam: ipv4 00:13:52.804 subtype: nvme subsystem 00:13:52.804 treq: not required 00:13:52.804 portid: 0 00:13:52.804 trsvcid: 4420 00:13:52.804 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:52.804 traddr: 10.0.0.2 00:13:52.804 eflags: none 00:13:52.804 sectype: none 00:13:52.804 =====Discovery Log Entry 2====== 00:13:52.804 trtype: tcp 00:13:52.804 adrfam: ipv4 00:13:52.804 subtype: nvme subsystem 00:13:52.804 treq: not required 00:13:52.804 portid: 0 00:13:52.804 trsvcid: 4420 00:13:52.804 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:52.804 traddr: 10.0.0.2 00:13:52.804 eflags: none 00:13:52.804 sectype: none 00:13:52.804 =====Discovery Log Entry 3====== 00:13:52.804 trtype: tcp 00:13:52.804 adrfam: ipv4 00:13:52.804 subtype: nvme subsystem 00:13:52.804 treq: not required 00:13:52.804 portid: 0 00:13:52.804 trsvcid: 4420 00:13:52.804 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:52.804 traddr: 10.0.0.2 00:13:52.804 eflags: none 00:13:52.804 sectype: none 00:13:52.804 =====Discovery Log Entry 4====== 00:13:52.804 trtype: tcp 00:13:52.804 adrfam: ipv4 00:13:52.804 subtype: nvme subsystem 
00:13:52.804 treq: not required 00:13:52.804 portid: 0 00:13:52.804 trsvcid: 4420 00:13:52.804 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:52.804 traddr: 10.0.0.2 00:13:52.804 eflags: none 00:13:52.804 sectype: none 00:13:52.804 =====Discovery Log Entry 5====== 00:13:52.804 trtype: tcp 00:13:52.804 adrfam: ipv4 00:13:52.804 subtype: discovery subsystem referral 00:13:52.804 treq: not required 00:13:52.804 portid: 0 00:13:52.804 trsvcid: 4430 00:13:52.804 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:52.804 traddr: 10.0.0.2 00:13:52.804 eflags: none 00:13:52.804 sectype: none 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:52.804 Perform nvmf subsystem discovery via RPC 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.804 [ 00:13:52.804 { 00:13:52.804 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:52.804 "subtype": "Discovery", 00:13:52.804 "listen_addresses": [ 00:13:52.804 { 00:13:52.804 "trtype": "TCP", 00:13:52.804 "adrfam": "IPv4", 00:13:52.804 "traddr": "10.0.0.2", 00:13:52.804 "trsvcid": "4420" 00:13:52.804 } 00:13:52.804 ], 00:13:52.804 "allow_any_host": true, 00:13:52.804 "hosts": [] 00:13:52.804 }, 00:13:52.804 { 00:13:52.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.804 "subtype": "NVMe", 00:13:52.804 "listen_addresses": [ 00:13:52.804 { 00:13:52.804 "trtype": "TCP", 00:13:52.804 "adrfam": "IPv4", 00:13:52.804 "traddr": "10.0.0.2", 00:13:52.804 "trsvcid": "4420" 00:13:52.804 } 00:13:52.804 ], 00:13:52.804 "allow_any_host": true, 00:13:52.804 "hosts": [], 00:13:52.804 "serial_number": "SPDK00000000000001", 00:13:52.804 "model_number": "SPDK bdev Controller", 00:13:52.804 "max_namespaces": 32, 00:13:52.804 "min_cntlid": 1, 00:13:52.804 "max_cntlid": 65519, 00:13:52.804 "namespaces": [ 00:13:52.804 { 00:13:52.804 "nsid": 1, 00:13:52.804 "bdev_name": "Null1", 00:13:52.804 "name": "Null1", 00:13:52.804 "nguid": "0252DCAB65B54717B08FD3D41D64D9E2", 00:13:52.804 "uuid": "0252dcab-65b5-4717-b08f-d3d41d64d9e2" 00:13:52.804 } 00:13:52.804 ] 00:13:52.804 }, 00:13:52.804 { 00:13:52.804 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:52.804 "subtype": "NVMe", 00:13:52.804 "listen_addresses": [ 00:13:52.804 { 00:13:52.804 "trtype": "TCP", 00:13:52.804 "adrfam": "IPv4", 00:13:52.804 "traddr": "10.0.0.2", 00:13:52.804 "trsvcid": "4420" 00:13:52.804 } 00:13:52.804 ], 00:13:52.804 "allow_any_host": true, 00:13:52.804 "hosts": [], 00:13:52.804 "serial_number": "SPDK00000000000002", 00:13:52.804 "model_number": "SPDK bdev Controller", 00:13:52.804 "max_namespaces": 32, 00:13:52.804 "min_cntlid": 1, 00:13:52.804 "max_cntlid": 65519, 00:13:52.804 "namespaces": [ 00:13:52.804 { 00:13:52.804 "nsid": 1, 00:13:52.804 "bdev_name": "Null2", 00:13:52.804 "name": "Null2", 00:13:52.804 "nguid": "406CBDA29A9A49CEB0D3C9456EB6CDDB", 00:13:52.804 "uuid": "406cbda2-9a9a-49ce-b0d3-c9456eb6cddb" 00:13:52.804 } 00:13:52.804 ] 00:13:52.804 }, 00:13:52.804 { 00:13:52.804 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:52.804 "subtype": "NVMe", 00:13:52.804 "listen_addresses": [ 00:13:52.804 { 00:13:52.804 "trtype": "TCP", 00:13:52.804 "adrfam": "IPv4", 00:13:52.804 "traddr": "10.0.0.2", 
00:13:52.804 "trsvcid": "4420" 00:13:52.804 } 00:13:52.804 ], 00:13:52.804 "allow_any_host": true, 00:13:52.804 "hosts": [], 00:13:52.804 "serial_number": "SPDK00000000000003", 00:13:52.804 "model_number": "SPDK bdev Controller", 00:13:52.804 "max_namespaces": 32, 00:13:52.804 "min_cntlid": 1, 00:13:52.804 "max_cntlid": 65519, 00:13:52.804 "namespaces": [ 00:13:52.804 { 00:13:52.804 "nsid": 1, 00:13:52.804 "bdev_name": "Null3", 00:13:52.804 "name": "Null3", 00:13:52.804 "nguid": "72EB5D3C025D4A2BA429E5AF599235A8", 00:13:52.804 "uuid": "72eb5d3c-025d-4a2b-a429-e5af599235a8" 00:13:52.804 } 00:13:52.804 ] 00:13:52.804 }, 00:13:52.804 { 00:13:52.804 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:52.804 "subtype": "NVMe", 00:13:52.804 "listen_addresses": [ 00:13:52.804 { 00:13:52.804 "trtype": "TCP", 00:13:52.804 "adrfam": "IPv4", 00:13:52.804 "traddr": "10.0.0.2", 00:13:52.804 "trsvcid": "4420" 00:13:52.804 } 00:13:52.804 ], 00:13:52.804 "allow_any_host": true, 00:13:52.804 "hosts": [], 00:13:52.804 "serial_number": "SPDK00000000000004", 00:13:52.804 "model_number": "SPDK bdev Controller", 00:13:52.804 "max_namespaces": 32, 00:13:52.804 "min_cntlid": 1, 00:13:52.804 "max_cntlid": 65519, 00:13:52.804 "namespaces": [ 00:13:52.804 { 00:13:52.804 "nsid": 1, 00:13:52.804 "bdev_name": "Null4", 00:13:52.804 "name": "Null4", 00:13:52.804 "nguid": "C54CE73F3B4540438008AFEF25A18FD9", 00:13:52.804 "uuid": "c54ce73f-3b45-4043-8008-afef25a18fd9" 00:13:52.804 } 00:13:52.804 ] 00:13:52.804 } 00:13:52.804 ] 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:52.804 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:52.805 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.805 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.805 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.805 17:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:52.805 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.805 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.805 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.805 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:52.805 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:52.805 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.805 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:53.067 17:29:44 
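
Teardown mirrors setup one-to-one; reconstructed with the same caveats as the provisioning sketch above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 4); do
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $rpc bdev_null_delete Null$i
  done
  $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  $rpc bdev_get_bdevs | jq -r '.[].name'   # the trace asserts this comes back empty
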
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:53.067 17:29:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:53.067 rmmod nvme_tcp 00:13:53.067 rmmod nvme_fabrics 00:13:53.067 rmmod nvme_keyring 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1605066 ']' 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1605066 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1605066 ']' 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1605066 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1605066 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1605066' 00:13:53.067 killing process with pid 1605066 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1605066 00:13:53.067 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1605066 00:13:53.329 17:29:45 
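
killprocess only signals the pid after confirming it is still alive and, via ps, that it is not the sudo wrapper. A minimal sketch of that guard, with the body inferred from the '[ ... ]' traces rather than copied from autotest_common.sh:

  killprocess() {
      local pid=$1 name
      kill -0 "$pid" 2>/dev/null || return 0          # already gone
      name=$(ps --no-headers -o comm= "$pid")         # reactor_0 in this run
      [ "$name" = sudo ] && return 1                  # refuse to kill the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }
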
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.329 17:29:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:55.877 00:13:55.877 real 0m11.170s 00:13:55.877 user 0m6.631s 00:13:55.877 sys 0m6.103s 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.877 ************************************ 00:13:55.877 END TEST nvmf_target_discovery 00:13:55.877 ************************************ 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:55.877 ************************************ 00:13:55.877 START TEST nvmf_referrals 00:13:55.877 ************************************ 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:55.877 * Looking for test storage... 
00:13:55.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:55.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.877 --rc genhtml_branch_coverage=1 00:13:55.877 --rc genhtml_function_coverage=1 00:13:55.877 --rc genhtml_legend=1 00:13:55.877 --rc geninfo_all_blocks=1 00:13:55.877 --rc geninfo_unexecuted_blocks=1 00:13:55.877 00:13:55.877 ' 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:55.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.877 --rc genhtml_branch_coverage=1 00:13:55.877 --rc genhtml_function_coverage=1 00:13:55.877 --rc genhtml_legend=1 00:13:55.877 --rc geninfo_all_blocks=1 00:13:55.877 --rc geninfo_unexecuted_blocks=1 00:13:55.877 00:13:55.877 ' 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:55.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.877 --rc genhtml_branch_coverage=1 00:13:55.877 --rc genhtml_function_coverage=1 00:13:55.877 --rc genhtml_legend=1 00:13:55.877 --rc geninfo_all_blocks=1 00:13:55.877 --rc geninfo_unexecuted_blocks=1 00:13:55.877 00:13:55.877 ' 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:55.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.877 --rc genhtml_branch_coverage=1 00:13:55.877 --rc genhtml_function_coverage=1 00:13:55.877 --rc genhtml_legend=1 00:13:55.877 --rc geninfo_all_blocks=1 00:13:55.877 --rc geninfo_unexecuted_blocks=1 00:13:55.877 00:13:55.877 ' 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
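
The lt/decimal trace above splits each version string on '.', '-' and ':' and compares numeric components left to right, treating missing components as 0 (so lt 1.15 2 is true at the first component). A standalone sketch of that comparison; cmp_lt is my name, the real helpers are scripts/common.sh's lt/cmp_versions, and this handles numeric components only:

  cmp_lt() {   # 0 (true) if $1 sorts strictly before $2
      local IFS='.-:' i a b v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          a=${v1[i]:-0} b=${v2[i]:-0}
          ((a < b)) && return 0
          ((a > b)) && return 1
      done
      return 1   # equal, hence not less-than
  }
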
# uname -s 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.877 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:55.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
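
The "[: : integer expression expected" complaint a few lines up is benign but worth decoding: test(1)'s -eq requires integers on both sides, and at nvmf/common.sh line 33 an unset variable expands to the empty string. A two-line reproduction plus the usual null-safe spelling (SOME_FLAG is a hypothetical name, not the variable common.sh uses):

  [ '' -eq 1 ]                  # -> "[: : integer expression expected", status 2
  [ "${SOME_FLAG:-0}" -eq 1 ]   # null-safe: empty/unset falls back to 0
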
00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:55.878 17:29:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:04.022 17:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.022 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:04.023 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:04.023 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:04.023 
17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:04.023 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:04.023 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:04.023 17:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.023 17:29:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:04.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:04.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:14:04.023 00:14:04.023 --- 10.0.0.2 ping statistics --- 00:14:04.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.023 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:14:04.023 00:14:04.023 --- 10.0.0.1 ping statistics --- 00:14:04.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.023 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.023 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1609433 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1609433 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1609433 ']' 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
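nvmf_tcp_init above splits the two E810 ports across namespaces: cvl_0_0 (the target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk netns while cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, and a ping in each direction proves L3 reachability before any NVMe/TCP traffic. A hand-runnable sketch of the same split, using a veth pair instead of two physical ports (all names here are illustrative):

  sudo ip netns add tgt_ns
  sudo ip link add veth_ini type veth peer name veth_tgt
  sudo ip link set veth_tgt netns tgt_ns              # target side into the netns
  sudo ip addr add 10.0.0.1/24 dev veth_ini           # initiator IP, root ns
  sudo ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  sudo ip link set veth_ini up
  sudo ip netns exec tgt_ns ip link set veth_tgt up
  sudo ip netns exec tgt_ns ip link set lo up
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  sudo ip netns exec tgt_ns ping -c 1 10.0.0.1        # target ns -> root ns

The iptables ACCEPT rule in the trace (dport 4420, tagged with an SPDK_NVMF comment so cleanup can grep it back out) only matters on hosts with a restrictive INPUT chain.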
00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.024 17:29:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.024 [2024-12-06 17:29:55.279945] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:14:04.024 [2024-12-06 17:29:55.280015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.024 [2024-12-06 17:29:55.378836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.024 [2024-12-06 17:29:55.432151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.024 [2024-12-06 17:29:55.432203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.024 [2024-12-06 17:29:55.432212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.024 [2024-12-06 17:29:55.432220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.024 [2024-12-06 17:29:55.432226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.024 [2024-12-06 17:29:55.434610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.024 [2024-12-06 17:29:55.434776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.024 [2024-12-06 17:29:55.434973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:04.024 [2024-12-06 17:29:55.434973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.285 [2024-12-06 17:29:56.157101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
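rpc_cmd in the trace is a thin wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, which the namespaced target still exposes on the shared filesystem. A sketch of the same bring-up issued by hand, assuming an SPDK checkout as the working directory; the transport flags are copied verbatim from the trace, and the full discovery NQN below is what the trace's 'discovery' shorthand expands to:

  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xF &
  sleep 2   # crude stand-in for the waitforlisten polling above
  sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  sudo ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 \
      nqn.2014-08.org.nvmexpress.discovery

Port 8009 is the conventional NVMe-oF discovery service port, which is why the nvme discover calls later in the trace all target 10.0.0.2:8009.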
00:14:04.285 [2024-12-06 17:29:56.186903] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:04.285 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:04.629 17:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:04.629 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:04.906 17:29:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:05.167 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:05.167 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:05.167 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:05.167 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:05.167 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:05.167 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:05.167 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:05.427 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:05.428 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:05.428 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:05.428 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:05.428 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:05.428 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.689 17:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:05.689 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:05.950 17:29:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:06.211 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
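The test keeps alternating between two views of the referral list and asserting they match: the target-side view over RPC and the initiator-side view from a discovery log page. A condensed, hand-runnable version of that check, with the jq filters copied from the trace (the host NQN/ID arguments the trace passes to nvme discover are omitted here for brevity):

  rpc_ips=$(sudo ./scripts/rpc.py nvmf_discovery_get_referrals \
            | jq -r '.[].address.traddr' | sort)
  nvme_ips=$(sudo nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
            | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
            | sort)
  [[ "$rpc_ips" == "$nvme_ips" ]] && echo "referral views agree"

The select() clause drops the discovery subsystem's own entry so only referral records are compared, which is exactly why the expected lists in the trace shrink from three addresses to none as referrals are removed.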
00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.473 rmmod nvme_tcp 00:14:06.473 rmmod nvme_fabrics 00:14:06.473 rmmod nvme_keyring 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1609433 ']' 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1609433 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1609433 ']' 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1609433 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.473 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1609433 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1609433' 00:14:06.735 killing process with pid 1609433 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1609433 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1609433 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.735 17:29:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.735 17:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.283 17:30:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:09.283 00:14:09.283 real 0m13.319s 00:14:09.283 user 0m16.109s 00:14:09.283 sys 0m6.494s 00:14:09.284 17:30:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.284 17:30:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.284 ************************************ 00:14:09.284 END TEST nvmf_referrals 00:14:09.284 ************************************ 00:14:09.284 17:30:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:09.284 17:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:09.284 17:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.284 17:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.284 ************************************ 00:14:09.284 START TEST nvmf_connect_disconnect 00:14:09.284 ************************************ 00:14:09.284 17:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:09.284 * Looking for test storage... 00:14:09.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.284 17:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:09.284 17:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:14:09.284 17:30:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.284 17:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.284 --rc genhtml_branch_coverage=1 00:14:09.284 --rc genhtml_function_coverage=1 00:14:09.284 --rc genhtml_legend=1 00:14:09.284 --rc geninfo_all_blocks=1 00:14:09.284 --rc geninfo_unexecuted_blocks=1 00:14:09.284 00:14:09.284 ' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.284 --rc genhtml_branch_coverage=1 00:14:09.284 --rc genhtml_function_coverage=1 00:14:09.284 --rc genhtml_legend=1 00:14:09.284 --rc geninfo_all_blocks=1 00:14:09.284 --rc geninfo_unexecuted_blocks=1 00:14:09.284 00:14:09.284 ' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.284 --rc genhtml_branch_coverage=1 00:14:09.284 --rc genhtml_function_coverage=1 00:14:09.284 --rc genhtml_legend=1 00:14:09.284 --rc geninfo_all_blocks=1 00:14:09.284 --rc geninfo_unexecuted_blocks=1 00:14:09.284 00:14:09.284 ' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.284 --rc genhtml_branch_coverage=1 00:14:09.284 --rc genhtml_function_coverage=1 00:14:09.284 --rc genhtml_legend=1 00:14:09.284 --rc geninfo_all_blocks=1 00:14:09.284 --rc geninfo_unexecuted_blocks=1 00:14:09.284 00:14:09.284 ' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.284 17:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.284 17:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:17.430 
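The "line 33: [: : integer expression expected" message above is bash's test builtin complaining that -eq was handed an empty string instead of a number; it is noise, not a failure, since '[' simply returns non-zero and the script continues. A minimal reproduction and the usual guard (variable name illustrative):

  flag=""
  [ "$flag" -eq 1 ] && echo yes        # prints the error, test is false
  [ "${flag:-0}" -eq 1 ] && echo yes   # defaulting to 0 avoids the error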
17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:17.430 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:17.430 
17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:17.430 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:17.430 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:17.430 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
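The per-device loop above resolves each PCI function to its kernel interface with a sysfs glob, then strips the result down to the bare interface name. The lookup in isolation, with the address taken from the trace:

    pci=0000:4b:00.0
    shopt -s nullglob                                 # empty array when no net/ entries exist
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

Here that yields cvl_0_0, matching the "Found net devices under 0000:4b:00.0" line above.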
00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:17.431 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:17.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:14:17.431 00:14:17.431 --- 10.0.0.2 ping statistics --- 00:14:17.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.431 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:14:17.431 00:14:17.431 --- 10.0.0.1 ping statistics --- 00:14:17.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.431 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1614439 00:14:17.431 17:30:08 
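nvmf_tcp_init above builds a loopback topology out of the two physical ports: the target port is moved into a private network namespace so that initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) traffic actually crosses the link between the two E810 ports instead of short-circuiting through the local stack. A condensed sketch of the traced sequence, with names taken verbatim from the log:

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                          # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> root ns

The two pings at the end are the same sanity check the log records (0.632 ms and 0.324 ms round trips) before any NVMe-oF traffic is attempted.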
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1614439 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1614439 ']' 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.431 17:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.431 [2024-12-06 17:30:08.763739] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:14:17.431 [2024-12-06 17:30:08.763806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.431 [2024-12-06 17:30:08.861161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.431 [2024-12-06 17:30:08.914916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.431 [2024-12-06 17:30:08.914965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.431 [2024-12-06 17:30:08.914974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.431 [2024-12-06 17:30:08.914981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.431 [2024-12-06 17:30:08.914987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
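waitforlisten above blocks until the freshly started nvmf_tgt (pid 1614439, launched inside the namespace with -m 0xF, hence the four reactor cores reported below) answers on its RPC socket. A condensed sketch of that pattern; rpc_get_methods is a standard SPDK RPC, the probe mechanism here is an approximation of the helper, and the socket path and retry budget mirror the trace:

    pid=$nvmfpid                      # pid captured when nvmf_tgt was launched
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done

Only after this succeeds does the script install the shutdown trap and begin issuing rpc_cmd calls.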
00:14:17.432 [2024-12-06 17:30:08.916948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.432 [2024-12-06 17:30:08.917109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.432 [2024-12-06 17:30:08.917273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.432 [2024-12-06 17:30:08.917274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.694 [2024-12-06 17:30:09.647350] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.694 17:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.694 [2024-12-06 17:30:09.727036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:17.694 17:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:21.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:36.265 rmmod nvme_tcp 00:14:36.265 rmmod nvme_fabrics 00:14:36.265 rmmod nvme_keyring 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1614439 ']' 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1614439 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1614439 ']' 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1614439 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
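The rpc_cmd calls above provision the whole target in a few steps, and the five "NQN:... disconnected 1 controller(s)" lines that follow are the test body itself: connect_disconnect repeats an nvme-cli connect/disconnect cycle num_iterations=5 times. A sketch of the same flow as direct rpc.py and nvme-cli invocations, with flags copied from the trace:

    #!/usr/bin/env bash
    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    bdev=$($rpc bdev_malloc_create 64 512)            # 64 MiB bdev, 512 B blocks -> "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in {1..5}; do                               # num_iterations=5, as in the trace
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # prints "disconnected 1 controller(s)"
    done

The real script adds readiness checks between connect and disconnect that are elided here.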
00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1614439 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1614439' 00:14:36.265 killing process with pid 1614439 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1614439 00:14:36.265 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1614439 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.526 17:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.440 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:38.440 00:14:38.440 real 0m29.598s 00:14:38.440 user 1m19.528s 00:14:38.440 sys 0m7.302s 00:14:38.440 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.440 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.440 ************************************ 00:14:38.440 END TEST nvmf_connect_disconnect 00:14:38.440 ************************************ 00:14:38.440 17:30:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:38.440 17:30:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:38.440 17:30:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.440 17:30:30 
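The iptr step in the teardown above is the counterpart of the ipts wrapper used during setup: every rule the test inserted carried an '-m comment --comment SPDK_NVMF:...' tag, so cleanup can remove exactly those rules by round-tripping the ruleset through a filter, with no need to track rule positions. The traced idiom, condensed:

    # Setup (via the ipts wrapper): tag each inserted rule with its own spec.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Teardown (iptr): drop every tagged rule in one pass.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Embedding the original rule spec in the comment also makes any leftover rule self-describing if a run is killed before teardown.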
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:38.702 ************************************ 00:14:38.702 START TEST nvmf_multitarget 00:14:38.702 ************************************ 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:38.702 * Looking for test storage... 00:14:38.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.702 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:38.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.703 --rc genhtml_branch_coverage=1 00:14:38.703 --rc genhtml_function_coverage=1 00:14:38.703 --rc genhtml_legend=1 00:14:38.703 --rc geninfo_all_blocks=1 00:14:38.703 --rc geninfo_unexecuted_blocks=1 00:14:38.703 00:14:38.703 ' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:38.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.703 --rc genhtml_branch_coverage=1 00:14:38.703 --rc genhtml_function_coverage=1 00:14:38.703 --rc genhtml_legend=1 00:14:38.703 --rc geninfo_all_blocks=1 00:14:38.703 --rc geninfo_unexecuted_blocks=1 00:14:38.703 00:14:38.703 ' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:38.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.703 --rc genhtml_branch_coverage=1 00:14:38.703 --rc genhtml_function_coverage=1 00:14:38.703 --rc genhtml_legend=1 00:14:38.703 --rc geninfo_all_blocks=1 00:14:38.703 --rc geninfo_unexecuted_blocks=1 00:14:38.703 00:14:38.703 ' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:38.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.703 --rc genhtml_branch_coverage=1 00:14:38.703 --rc genhtml_function_coverage=1 00:14:38.703 --rc genhtml_legend=1 00:14:38.703 --rc geninfo_all_blocks=1 00:14:38.703 --rc geninfo_unexecuted_blocks=1 00:14:38.703 00:14:38.703 ' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.703 17:30:30 
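The scripts/common.sh calls above are a pure-bash version comparison: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both strings on '.', '-' and ':' and compares the fields numerically (here it gates which lcov --rc option spelling the coverage setup uses). A self-contained sketch of the less-than case:

    # Return 0 (true) when version $1 sorts before version $2.
    lt() {
        local IFS=.-:                 # split on the same separators as cmp_versions
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                      # equal versions are not less-than
    }

    lt 1.15 2 && echo "lcov predates 2.x"   # true: 1 < 2 in the first field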
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:38.703 17:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:38.703 17:30:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:46.879 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:46.880 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:46.880 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:46.880 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:46.880 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:46.880 17:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:46.880 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:46.880 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:46.880 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:46.880 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:46.880 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:46.880 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:46.880 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:46.880 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:46.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:14:46.880 00:14:46.880 --- 10.0.0.2 ping statistics --- 00:14:46.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.880 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:14:46.880 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:46.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:46.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:14:46.880 00:14:46.880 --- 10.0.0.1 ping statistics --- 00:14:46.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.880 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:14:46.880 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1622914 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1622914 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1622914 ']' 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.881 17:30:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:46.881 [2024-12-06 17:30:38.275926] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:14:46.881 [2024-12-06 17:30:38.275996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.881 [2024-12-06 17:30:38.372673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.881 [2024-12-06 17:30:38.427867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.881 [2024-12-06 17:30:38.427921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.881 [2024-12-06 17:30:38.427930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.881 [2024-12-06 17:30:38.427937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.881 [2024-12-06 17:30:38.427944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.881 [2024-12-06 17:30:38.429961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.881 [2024-12-06 17:30:38.430122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.881 [2024-12-06 17:30:38.430288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.881 [2024-12-06 17:30:38.430288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.141 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.141 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:47.141 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:47.141 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:47.141 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:47.141 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.141 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:47.141 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:47.141 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:47.401 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:47.401 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:47.401 "nvmf_tgt_1" 00:14:47.401 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:47.660 "nvmf_tgt_2" 00:14:47.660 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
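The multitarget checks above count targets with jq before and after each mutation: one default target at startup, three after creating nvmf_tgt_1 and nvmf_tgt_2, and (as the deletions just below confirm) back to one at the end. The cycle as a standalone sketch, with flags copied verbatim from the trace:

    #!/usr/bin/env bash
    # multitarget_rpc.py is the SPDK helper the trace drives.
    rpc_py=test/nvmf/target/multitarget_rpc.py

    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target

    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # -s 32 as in the trace
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]

    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default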
00:14:47.660 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:47.660 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:47.660 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:47.660 true 00:14:47.660 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:47.920 true 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.920 17:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.920 rmmod nvme_tcp 00:14:47.920 rmmod nvme_fabrics 00:14:48.180 rmmod nvme_keyring 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1622914 ']' 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1622914 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1622914 ']' 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1622914 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1622914 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.180 17:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1622914' 00:14:48.180 killing process with pid 1622914 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1622914 00:14:48.180 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1622914 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.439 17:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.354 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:50.354 00:14:50.354 real 0m11.831s 00:14:50.354 user 0m10.307s 00:14:50.354 sys 0m6.131s 00:14:50.354 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.354 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:50.354 ************************************ 00:14:50.354 END TEST nvmf_multitarget 00:14:50.354 ************************************ 00:14:50.354 17:30:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:50.354 17:30:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.354 17:30:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.354 17:30:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.616 ************************************ 00:14:50.616 START TEST nvmf_rpc 00:14:50.616 ************************************ 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:50.616 * Looking for test storage... 
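The tail of the multitarget run above is the shared nvmftestfini teardown: kill the nvmf_tgt process, unload the kernel NVMe initiator modules, strip only the iptables rules SPDK tagged with an SPDK_NVMF comment, drop the target-side network namespace, and flush the initiator address. Distilled into a sketch; the body of remove_spdk_ns never appears in this log, so the ip netns delete line is an assumption about what that helper does:

    kill "$nvmfpid"                                        # pid recorded by nvmfappstart earlier in the run
    modprobe -v -r nvme-tcp                                # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules SPDK tagged
    ip netns delete cvl_0_0_ns_spdk                        # assumption: what remove_spdk_ns boils down to
    ip -4 addr flush cvl_0_1                               # clear the initiator-side interface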
00:14:50.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.616 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:50.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.617 --rc genhtml_branch_coverage=1 00:14:50.617 --rc genhtml_function_coverage=1 00:14:50.617 --rc genhtml_legend=1 00:14:50.617 --rc geninfo_all_blocks=1 00:14:50.617 --rc geninfo_unexecuted_blocks=1 00:14:50.617 00:14:50.617 ' 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:50.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.617 --rc genhtml_branch_coverage=1 00:14:50.617 --rc genhtml_function_coverage=1 00:14:50.617 --rc genhtml_legend=1 00:14:50.617 --rc geninfo_all_blocks=1 00:14:50.617 --rc geninfo_unexecuted_blocks=1 00:14:50.617 00:14:50.617 ' 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:50.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.617 --rc genhtml_branch_coverage=1 00:14:50.617 --rc genhtml_function_coverage=1 00:14:50.617 --rc genhtml_legend=1 00:14:50.617 --rc geninfo_all_blocks=1 00:14:50.617 --rc geninfo_unexecuted_blocks=1 00:14:50.617 00:14:50.617 ' 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:50.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.617 --rc genhtml_branch_coverage=1 00:14:50.617 --rc genhtml_function_coverage=1 00:14:50.617 --rc genhtml_legend=1 00:14:50.617 --rc geninfo_all_blocks=1 00:14:50.617 --rc geninfo_unexecuted_blocks=1 00:14:50.617 00:14:50.617 ' 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
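The long cmp_versions walk above is rpc.sh probing the installed lcov: scripts/common.sh splits each version string on '.', '-' and ':' and compares the numeric components pairwise until one side wins. A minimal re-implementation written purely from this trace (the real helper in spdk/scripts/common.sh also validates each component with a decimal check, elided here; components are assumed numeric):

    lt() { cmp_versions "$1" '<' "$2"; }           # e.g. lt 1.15 2, as traced above
    cmp_versions() {
        local IFS=.-: v=0 op=$2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # walk components up to the longer version; missing components count as 0
        while ((v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == '>' ]]; return; }
            ((a < b)) && { [[ $op == '<' ]]; return; }
            ((++v))
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }
    # usage mirroring the trace: lt "$(lcov --version | awk '{print $NF}')" 2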
00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.617 17:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:50.617 17:30:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:58.767 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:58.767 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:58.767 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:58.767 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:58.767 17:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.767 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:58.768 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:58.768 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.768 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.768 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.768 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.768 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:58.768 17:30:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:58.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:14:58.768 00:14:58.768 --- 10.0.0.2 ping statistics --- 00:14:58.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.768 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:58.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:14:58.768 00:14:58.768 --- 10.0.0.1 ping statistics --- 00:14:58.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.768 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1627614 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1627614 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1627614 ']' 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.768 17:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.768 [2024-12-06 17:30:50.234974] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:14:58.768 [2024-12-06 17:30:50.235041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.768 [2024-12-06 17:30:50.336619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.768 [2024-12-06 17:30:50.389225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.768 [2024-12-06 17:30:50.389279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.768 [2024-12-06 17:30:50.389288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.768 [2024-12-06 17:30:50.389296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.768 [2024-12-06 17:30:50.389303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.768 [2024-12-06 17:30:50.391339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.768 [2024-12-06 17:30:50.391487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.768 [2024-12-06 17:30:50.391635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.768 [2024-12-06 17:30:50.391635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.029 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.029 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:59.029 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:59.029 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:59.029 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:59.291 "tick_rate": 2400000000, 00:14:59.291 "poll_groups": [ 00:14:59.291 { 00:14:59.291 "name": "nvmf_tgt_poll_group_000", 00:14:59.291 "admin_qpairs": 0, 00:14:59.291 "io_qpairs": 0, 00:14:59.291 "current_admin_qpairs": 0, 00:14:59.291 "current_io_qpairs": 0, 00:14:59.291 "pending_bdev_io": 0, 00:14:59.291 "completed_nvme_io": 0, 00:14:59.291 "transports": [] 00:14:59.291 }, 00:14:59.291 { 00:14:59.291 "name": "nvmf_tgt_poll_group_001", 00:14:59.291 "admin_qpairs": 0, 00:14:59.291 "io_qpairs": 0, 00:14:59.291 "current_admin_qpairs": 0, 00:14:59.291 "current_io_qpairs": 0, 00:14:59.291 "pending_bdev_io": 0, 00:14:59.291 "completed_nvme_io": 0, 00:14:59.291 "transports": [] 00:14:59.291 }, 00:14:59.291 { 00:14:59.291 "name": "nvmf_tgt_poll_group_002", 00:14:59.291 "admin_qpairs": 0, 00:14:59.291 "io_qpairs": 0, 00:14:59.291 
"current_admin_qpairs": 0, 00:14:59.291 "current_io_qpairs": 0, 00:14:59.291 "pending_bdev_io": 0, 00:14:59.291 "completed_nvme_io": 0, 00:14:59.291 "transports": [] 00:14:59.291 }, 00:14:59.291 { 00:14:59.291 "name": "nvmf_tgt_poll_group_003", 00:14:59.291 "admin_qpairs": 0, 00:14:59.291 "io_qpairs": 0, 00:14:59.291 "current_admin_qpairs": 0, 00:14:59.291 "current_io_qpairs": 0, 00:14:59.291 "pending_bdev_io": 0, 00:14:59.291 "completed_nvme_io": 0, 00:14:59.291 "transports": [] 00:14:59.291 } 00:14:59.291 ] 00:14:59.291 }' 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.291 [2024-12-06 17:30:51.229985] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.291 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:59.291 "tick_rate": 2400000000, 00:14:59.291 "poll_groups": [ 00:14:59.291 { 00:14:59.291 "name": "nvmf_tgt_poll_group_000", 00:14:59.291 "admin_qpairs": 0, 00:14:59.291 "io_qpairs": 0, 00:14:59.291 "current_admin_qpairs": 0, 00:14:59.291 "current_io_qpairs": 0, 00:14:59.291 "pending_bdev_io": 0, 00:14:59.291 "completed_nvme_io": 0, 00:14:59.291 "transports": [ 00:14:59.291 { 00:14:59.291 "trtype": "TCP" 00:14:59.291 } 00:14:59.291 ] 00:14:59.291 }, 00:14:59.291 { 00:14:59.291 "name": "nvmf_tgt_poll_group_001", 00:14:59.291 "admin_qpairs": 0, 00:14:59.291 "io_qpairs": 0, 00:14:59.291 "current_admin_qpairs": 0, 00:14:59.291 "current_io_qpairs": 0, 00:14:59.291 "pending_bdev_io": 0, 00:14:59.291 "completed_nvme_io": 0, 00:14:59.291 "transports": [ 00:14:59.291 { 00:14:59.291 "trtype": "TCP" 00:14:59.291 } 00:14:59.291 ] 00:14:59.291 }, 00:14:59.291 { 00:14:59.291 "name": "nvmf_tgt_poll_group_002", 00:14:59.291 "admin_qpairs": 0, 00:14:59.291 "io_qpairs": 0, 00:14:59.291 "current_admin_qpairs": 0, 00:14:59.291 "current_io_qpairs": 0, 00:14:59.291 "pending_bdev_io": 0, 00:14:59.291 "completed_nvme_io": 0, 00:14:59.291 "transports": [ 00:14:59.291 { 00:14:59.291 "trtype": "TCP" 
00:14:59.291 } 00:14:59.291 ] 00:14:59.291 }, 00:14:59.291 { 00:14:59.291 "name": "nvmf_tgt_poll_group_003", 00:14:59.291 "admin_qpairs": 0, 00:14:59.291 "io_qpairs": 0, 00:14:59.291 "current_admin_qpairs": 0, 00:14:59.292 "current_io_qpairs": 0, 00:14:59.292 "pending_bdev_io": 0, 00:14:59.292 "completed_nvme_io": 0, 00:14:59.292 "transports": [ 00:14:59.292 { 00:14:59.292 "trtype": "TCP" 00:14:59.292 } 00:14:59.292 ] 00:14:59.292 } 00:14:59.292 ] 00:14:59.292 }' 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.292 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.554 Malloc1 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.554 [2024-12-06 17:30:51.446311] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:59.554 [2024-12-06 17:30:51.483403] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:59.554 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:59.554 could not add new controller: failed to write to nvme-fabrics device 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:59.554 17:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.554 17:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:00.944 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.944 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:00.944 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.944 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:00.944 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:03.493 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.493 [2024-12-06 17:30:55.209766] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:03.493 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:03.494 could not add new controller: failed to write to nvme-fabrics device 00:15:03.494 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:03.494 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:03.494 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:03.494 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:03.494 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:03.494 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.494 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.494 
17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.494 17:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:04.876 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:04.876 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:04.876 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.876 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:04.876 17:30:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:06.794 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:06.794 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:06.794 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:06.794 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:06.794 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.794 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:06.794 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:06.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.794 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:06.794 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:07.054 
17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.054 [2024-12-06 17:30:58.918892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.054 17:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:08.963 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.963 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:08.963 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.963 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:08.963 17:31:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.874 [2024-12-06 17:31:02.668959] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.874 17:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:12.260 17:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:12.260 17:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:12.260 17:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.260 17:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:12.260 17:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.803 [2024-12-06 17:31:06.425755] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.803 17:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:16.190 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:16.190 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:16.190 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.190 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:16.190 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:18.208 
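Each of the five loop iterations in this stretch of the log runs the same create, listen, attach, connect, disconnect, teardown cycle; condensed, one pass looks like this sketch (assuming rpc.py on PATH and the Malloc1 bdev created earlier in the test):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # attach as nsid 5
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1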
17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.208 [2024-12-06 17:31:10.222397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.208 17:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:20.140 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:20.140 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:20.140 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.140 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:20.140 17:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
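The waitforserial xtrace above, and the waitforserial_disconnect run that continues below, are autotest_common.sh helpers that poll lsblk until a block device with the expected serial appears (or vanishes). The appear side behaves roughly like this sketch:

  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      done
      return 1    # device never enumerated
  }
  waitforserial SPDKISFASTANDAWESOME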
00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.052 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.052 [2024-12-06 17:31:14.024704] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.052 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:23.963 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.963 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:23.963 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.963 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:23.963 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:25.877 
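The seq 1 5 above opens the test's final loop (rpc.sh@99-107), which repeats the cycle five more times without ever connecting an initiator, exercising namespace hot-add/remove purely over the RPC path. One iteration, sketched:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # nsid auto-assigned (1 here)
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1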
17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.877 [2024-12-06 17:31:17.736583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.877 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 [2024-12-06 17:31:17.796728] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 
17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 [2024-12-06 17:31:17.864922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 [2024-12-06 17:31:17.937167] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.139 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.139 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:26.139 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.139 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.139 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.140 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.140 [2024-12-06 17:31:18.005380] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:26.140 "tick_rate": 2400000000, 00:15:26.140 "poll_groups": [ 00:15:26.140 { 00:15:26.140 "name": "nvmf_tgt_poll_group_000", 00:15:26.140 "admin_qpairs": 0, 00:15:26.140 "io_qpairs": 224, 00:15:26.140 "current_admin_qpairs": 0, 00:15:26.140 "current_io_qpairs": 0, 00:15:26.140 "pending_bdev_io": 0, 00:15:26.140 "completed_nvme_io": 275, 00:15:26.140 "transports": [ 00:15:26.140 { 00:15:26.140 "trtype": "TCP" 00:15:26.140 } 00:15:26.140 ] 00:15:26.140 }, 00:15:26.140 { 00:15:26.140 "name": "nvmf_tgt_poll_group_001", 00:15:26.140 "admin_qpairs": 1, 00:15:26.140 "io_qpairs": 223, 00:15:26.140 "current_admin_qpairs": 0, 00:15:26.140 "current_io_qpairs": 0, 00:15:26.140 "pending_bdev_io": 0, 00:15:26.140 "completed_nvme_io": 459, 00:15:26.140 "transports": [ 00:15:26.140 { 00:15:26.140 "trtype": "TCP" 00:15:26.140 } 00:15:26.140 ] 00:15:26.140 }, 00:15:26.140 { 00:15:26.140 "name": "nvmf_tgt_poll_group_002", 00:15:26.140 "admin_qpairs": 6, 00:15:26.140 "io_qpairs": 218, 00:15:26.140 "current_admin_qpairs": 0, 00:15:26.140 "current_io_qpairs": 0, 00:15:26.140 "pending_bdev_io": 0, 00:15:26.140 "completed_nvme_io": 221, 00:15:26.140 "transports": [ 00:15:26.140 { 00:15:26.140 "trtype": "TCP" 00:15:26.140 } 00:15:26.140 ] 00:15:26.140 }, 00:15:26.140 { 00:15:26.140 "name": "nvmf_tgt_poll_group_003", 00:15:26.140 "admin_qpairs": 0, 00:15:26.140 "io_qpairs": 224, 00:15:26.140 "current_admin_qpairs": 0, 00:15:26.140 "current_io_qpairs": 0, 00:15:26.140 "pending_bdev_io": 0, 00:15:26.140 "completed_nvme_io": 284, 00:15:26.140 "transports": [ 00:15:26.140 { 00:15:26.140 "trtype": "TCP" 00:15:26.140 } 00:15:26.140 ] 00:15:26.140 } 00:15:26.140 ] 00:15:26.140 }' 00:15:26.140 17:31:18 
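The nvmf_get_stats JSON captured above is reduced by jsum (target/rpc.sh@19-20), which runs a jq filter over the stats and sums the resulting numbers with awk; the assertions that follow check 0+1+6+0 = 7 admin qpairs and 224+223+218+224 = 889 I/O qpairs across the four poll groups. Equivalent shorthand, assuming the JSON sits in $stats:

  jsum() { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }
  jsum '.poll_groups[].admin_qpairs'    # 7
  jsum '.poll_groups[].io_qpairs'       # 889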
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.140 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.140 rmmod nvme_tcp 00:15:26.140 rmmod nvme_fabrics 00:15:26.402 rmmod nvme_keyring 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1627614 ']' 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1627614 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1627614 ']' 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1627614 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1627614 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1627614' 00:15:26.402 killing process with pid 1627614 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1627614 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1627614 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.402 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:28.944 00:15:28.944 real 0m38.073s 00:15:28.944 user 1m54.066s 00:15:28.944 sys 0m7.895s 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.944 ************************************ 00:15:28.944 END TEST nvmf_rpc 00:15:28.944 ************************************ 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.944 ************************************ 00:15:28.944 START TEST nvmf_invalid 00:15:28.944 ************************************ 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:28.944 * Looking for test storage... 
00:15:28.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.944 --rc genhtml_branch_coverage=1 00:15:28.944 --rc genhtml_function_coverage=1 00:15:28.944 --rc genhtml_legend=1 00:15:28.944 --rc geninfo_all_blocks=1 00:15:28.944 --rc geninfo_unexecuted_blocks=1 00:15:28.944 00:15:28.944 ' 00:15:28.944 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.945 --rc genhtml_branch_coverage=1 00:15:28.945 --rc genhtml_function_coverage=1 00:15:28.945 --rc genhtml_legend=1 00:15:28.945 --rc geninfo_all_blocks=1 00:15:28.945 --rc geninfo_unexecuted_blocks=1 00:15:28.945 00:15:28.945 ' 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:28.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.945 --rc genhtml_branch_coverage=1 00:15:28.945 --rc genhtml_function_coverage=1 00:15:28.945 --rc genhtml_legend=1 00:15:28.945 --rc geninfo_all_blocks=1 00:15:28.945 --rc geninfo_unexecuted_blocks=1 00:15:28.945 00:15:28.945 ' 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:28.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.945 --rc genhtml_branch_coverage=1 00:15:28.945 --rc genhtml_function_coverage=1 00:15:28.945 --rc genhtml_legend=1 00:15:28.945 --rc geninfo_all_blocks=1 00:15:28.945 --rc geninfo_unexecuted_blocks=1 00:15:28.945 00:15:28.945 ' 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:28.945 17:31:20 
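The cmp_versions xtrace above is scripts/common.sh deciding that the installed lcov (1.15) predates version 2, which selects the legacy --rc lcov_*_coverage option spelling exported just afterwards. The script walks the dotted version fields one by one; sort -V reaches the same verdict in a couple of lines (a sketch, not the script's actual implementation):

  version_lt() {    # true when $1 sorts strictly before $2
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  version_lt 1.15 2 && echo 'lcov < 2: use legacy --rc lcov_*_coverage flags'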
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
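
Each successive paths/export.sh line above prepends the Go, golangci and protoc directories again, so by export.sh@6 every toolchain entry appears in PATH over and over; lookups still resolve identically, since only the first occurrence of a directory matters. A small dedup pass one could run over such a PATH (a hypothetical helper sketch, not part of the SPDK scripts):

    #!/usr/bin/env bash
    # dedup_path: print $PATH with repeated entries dropped,
    # preserving first-occurrence order.
    dedup_path() {
        local IFS=':' entry out='' seen=':'
        for entry in $PATH; do
            [[ $seen == *":$entry:"* ]] && continue   # already kept once
            seen+="$entry:"
            out+="${out:+:}$entry"
        done
        printf '%s\n' "$out"
    }

    PATH=$(dedup_path)
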
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:28.945 17:31:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:37.089 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:37.089 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:37.089 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:37.089 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:37.089 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
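
The discovery loop above reduces the PCI buckets to the two ice-bound E810 functions (device ID 0x159b) and resolves each one to its kernel netdev by globbing /sys/bus/pci/devices/$pci/net/*, yielding cvl_0_0 and cvl_0_1. A minimal sketch of that sysfs mapping (hypothetical helper, not the nvmf/common.sh code):

    #!/usr/bin/env bash
    # pci_to_netdev 0000:4b:00.0 -> print the net device name(s) bound to
    # that PCI function, via the same sysfs walk the trace performs.
    pci_to_netdev() {
        local pci=$1 dev
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] || continue    # unmatched glob: no netdev bound
            printf '%s\n' "${dev##*/}"   # strip the sysfs path prefix
        done
    }

    pci_to_netdev 0000:4b:00.0   # on this node: cvl_0_0
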
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.090 17:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:37.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:15:37.090 00:15:37.090 --- 10.0.0.2 ping statistics --- 00:15:37.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.090 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:37.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:15:37.090 00:15:37.090 --- 10.0.0.1 ping statistics --- 00:15:37.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.090 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1637360 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1637360 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1637360 ']' 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.090 17:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:37.090 [2024-12-06 17:31:28.377292] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
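
nvmf_tcp_init above splits the two ports across a network namespace: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and both directions are verified with a single ping. Condensed into one runnable block (same commands and interface names as this run, minus the iptables comment tag; requires root and the interfaces to exist):

    #!/usr/bin/env bash
    set -e
    # Target port lives in its own netns; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP traffic to the target's 4420 listener.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With the path proven both ways, the target binary is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the nvmfappstart in this part of the trace.
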
00:15:37.090 [2024-12-06 17:31:28.377366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.090 [2024-12-06 17:31:28.476566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.090 [2024-12-06 17:31:28.530232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.090 [2024-12-06 17:31:28.530288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.090 [2024-12-06 17:31:28.530297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.090 [2024-12-06 17:31:28.530304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.090 [2024-12-06 17:31:28.530311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.090 [2024-12-06 17:31:28.532342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.090 [2024-12-06 17:31:28.532504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.090 [2024-12-06 17:31:28.532686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.090 [2024-12-06 17:31:28.532687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.351 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.351 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:15:37.351 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.351 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:37.351 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:37.351 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.351 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:37.351 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20298 00:15:37.351 [2024-12-06 17:31:29.415173] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:37.612 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:37.612 { 00:15:37.612 "nqn": "nqn.2016-06.io.spdk:cnode20298", 00:15:37.612 "tgt_name": "foobar", 00:15:37.612 "method": "nvmf_create_subsystem", 00:15:37.612 "req_id": 1 00:15:37.612 } 00:15:37.612 Got JSON-RPC error response 00:15:37.612 response: 00:15:37.612 { 00:15:37.612 "code": -32603, 00:15:37.612 "message": "Unable to find target foobar" 00:15:37.612 }' 00:15:37.612 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:37.612 { 00:15:37.613 "nqn": "nqn.2016-06.io.spdk:cnode20298", 00:15:37.613 "tgt_name": "foobar", 00:15:37.613 "method": "nvmf_create_subsystem", 00:15:37.613 "req_id": 1 00:15:37.613 } 00:15:37.613 Got JSON-RPC error response 00:15:37.613 
response: 00:15:37.613 { 00:15:37.613 "code": -32603, 00:15:37.613 "message": "Unable to find target foobar" 00:15:37.613 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:37.613 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:37.613 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6226 00:15:37.613 [2024-12-06 17:31:29.620077] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6226: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:37.613 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:37.613 { 00:15:37.613 "nqn": "nqn.2016-06.io.spdk:cnode6226", 00:15:37.613 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:37.613 "method": "nvmf_create_subsystem", 00:15:37.613 "req_id": 1 00:15:37.613 } 00:15:37.613 Got JSON-RPC error response 00:15:37.613 response: 00:15:37.613 { 00:15:37.613 "code": -32602, 00:15:37.613 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:37.613 }' 00:15:37.613 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:37.613 { 00:15:37.613 "nqn": "nqn.2016-06.io.spdk:cnode6226", 00:15:37.613 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:37.613 "method": "nvmf_create_subsystem", 00:15:37.613 "req_id": 1 00:15:37.613 } 00:15:37.613 Got JSON-RPC error response 00:15:37.613 response: 00:15:37.613 { 00:15:37.613 "code": -32602, 00:15:37.613 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:37.613 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:37.613 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:37.613 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26944 00:15:37.875 [2024-12-06 17:31:29.824791] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26944: invalid model number 'SPDK_Controller' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:37.875 { 00:15:37.875 "nqn": "nqn.2016-06.io.spdk:cnode26944", 00:15:37.875 "model_number": "SPDK_Controller\u001f", 00:15:37.875 "method": "nvmf_create_subsystem", 00:15:37.875 "req_id": 1 00:15:37.875 } 00:15:37.875 Got JSON-RPC error response 00:15:37.875 response: 00:15:37.875 { 00:15:37.875 "code": -32602, 00:15:37.875 "message": "Invalid MN SPDK_Controller\u001f" 00:15:37.875 }' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:37.875 { 00:15:37.875 "nqn": "nqn.2016-06.io.spdk:cnode26944", 00:15:37.875 "model_number": "SPDK_Controller\u001f", 00:15:37.875 "method": "nvmf_create_subsystem", 00:15:37.875 "req_id": 1 00:15:37.875 } 00:15:37.875 Got JSON-RPC error response 00:15:37.875 response: 00:15:37.875 { 00:15:37.875 "code": -32602, 00:15:37.875 "message": "Invalid MN SPDK_Controller\u001f" 00:15:37.875 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:37.875 17:31:29 
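
Every negative case in this test follows the same shape: call scripts/rpc.py nvmf_create_subsystem with exactly one malformed argument, capture the JSON-RPC error response, and glob-match the message ('Unable to find target', 'Invalid SN', 'Invalid MN'). A stripped-down sketch of that pattern (assuming rpc.py is on PATH, the target is listening on the default socket, and rpc.py exits non-zero on a JSON-RPC error, as it does in this run):

    #!/usr/bin/env bash
    # expect_rpc_error <expected substring> <rpc args...>
    # Succeeds only when the RPC fails AND its error text matches -- the
    # same check invalid.sh writes as [[ $out == *\I\n\v\a\l\i\d\ \S\N* ]].
    expect_rpc_error() {
        local expect=$1 out; shift
        if out=$(rpc.py "$@" 2>&1); then
            echo "RPC unexpectedly succeeded: $*" >&2
            return 1
        fi
        [[ $out == *"$expect"* ]] ||
            { echo "wrong error for $*: $out" >&2; return 1; }
    }

    expect_rpc_error 'Unable to find target' \
        nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20298
    expect_rpc_error 'Invalid SN' \
        nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6226
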
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.875 17:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:37.875 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:38.137 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:38.137 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.137 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.137 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:38.137 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:38.137 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:38.137 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:38.138 
17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 
00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ : == \- ]] 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ':a:j_H{>bt,%4r/8A}`k_' 00:15:38.138 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ':a:j_H{>bt,%4r/8A}`k_' nqn.2016-06.io.spdk:cnode1881 00:15:38.402 [2024-12-06 17:31:30.206240] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1881: invalid serial number ':a:j_H{>bt,%4r/8A}`k_' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:38.402 { 00:15:38.402 "nqn": "nqn.2016-06.io.spdk:cnode1881", 00:15:38.402 "serial_number": ":a:j_H{>bt,%4r/8A}`k_", 00:15:38.402 "method": "nvmf_create_subsystem", 00:15:38.402 "req_id": 1 00:15:38.402 } 00:15:38.402 Got JSON-RPC error response 00:15:38.402 response: 00:15:38.402 { 00:15:38.402 "code": -32602, 00:15:38.402 "message": "Invalid SN :a:j_H{>bt,%4r/8A}`k_" 00:15:38.402 }' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:38.402 { 00:15:38.402 "nqn": "nqn.2016-06.io.spdk:cnode1881", 00:15:38.402 "serial_number": ":a:j_H{>bt,%4r/8A}`k_", 00:15:38.402 "method": "nvmf_create_subsystem", 00:15:38.402 "req_id": 1 00:15:38.402 } 00:15:38.402 Got JSON-RPC error response 00:15:38.402 response: 00:15:38.402 { 00:15:38.402 "code": -32602, 00:15:38.402 "message": "Invalid SN :a:j_H{>bt,%4r/8A}`k_" 00:15:38.402 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' 
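
The gen_random_s expansion traced above assembles the serial one character at a time: with RANDOM seeded to 0 for reproducibility, each iteration indexes a 96-entry table of ASCII codes 32..127, renders the code via printf %x plus echo -e '\xNN', and appends it, producing ':a:j_H{>bt,%4r/8A}`k_' for length 21 in this run. A compact sketch of the same idea (simplified, so it will not byte-match the table-driven original):

    #!/usr/bin/env bash
    # gen_random_s <length>: emit <length> random characters drawn from
    # ASCII 32..127, the same range the traced chars table covers.
    gen_random_s() {
        local length=$1 ll ch string=''
        for (( ll = 0; ll < length; ll++ )); do
            # one \xNN escape per character, as in the traced loop
            printf -v ch "\\x$(printf '%x' $(( RANDOM % 96 + 32 )))"
            string+=$ch
        done
        printf '%s\n' "$string"   # dash-safe; invalid.sh special-cases a leading '-'
    }

    RANDOM=0        # the test pins the seed the same way
    gen_random_s 21

The resulting string is then fed to nvmf_create_subsystem -s, and the case only passes because the target rejects it with 'Invalid SN', exactly as the 41-character run that begins here will be expected to do.
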
'75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:38.402 
17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:38.402 
17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:38.402 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 
17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.403 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 
00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 
00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:38.667 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:38.668 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:38.668 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:38.668 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:38.668 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:15:38.668 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '*oEo>1_]x+P$4BRF] 6bo.C!YQT ,8?J2R,FEX>x8' 00:15:38.668 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '*oEo>1_]x+P$4BRF] 6bo.C!YQT ,8?J2R,FEX>x8' nqn.2016-06.io.spdk:cnode21999 00:15:38.930 [2024-12-06 17:31:30.752310] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21999: invalid model number '*oEo>1_]x+P$4BRF] 6bo.C!YQT ,8?J2R,FEX>x8' 00:15:38.930 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:38.930 { 00:15:38.930 "nqn": "nqn.2016-06.io.spdk:cnode21999", 00:15:38.930 "model_number": "*oEo>1_]x+P$4BRF] 6bo.C!YQT ,8?J2R,FEX>x8", 00:15:38.930 "method": "nvmf_create_subsystem", 00:15:38.930 "req_id": 1 00:15:38.930 } 00:15:38.930 Got JSON-RPC error response 00:15:38.930 response: 00:15:38.930 { 00:15:38.930 "code": -32602, 00:15:38.930 "message": "Invalid MN *oEo>1_]x+P$4BRF] 6bo.C!YQT ,8?J2R,FEX>x8" 00:15:38.930 }' 00:15:38.930 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:38.930 { 00:15:38.930 "nqn": "nqn.2016-06.io.spdk:cnode21999", 00:15:38.930 "model_number": "*oEo>1_]x+P$4BRF] 6bo.C!YQT ,8?J2R,FEX>x8", 00:15:38.930 "method": "nvmf_create_subsystem", 00:15:38.930 "req_id": 1 
00:15:38.930 } 00:15:38.930 Got JSON-RPC error response 00:15:38.930 response: 00:15:38.930 { 00:15:38.930 "code": -32602, 00:15:38.930 "message": "Invalid MN *oEo>1_]x+P$4BRF] 6bo.C!YQT ,8?J2R,FEX>x8" 00:15:38.930 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:38.930 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:38.930 [2024-12-06 17:31:30.953208] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.930 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:39.191 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:39.191 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:39.191 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:39.191 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:39.191 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:39.452 [2024-12-06 17:31:31.350530] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:39.452 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:39.452 { 00:15:39.452 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:39.452 "listen_address": { 00:15:39.452 "trtype": "tcp", 00:15:39.452 "traddr": "", 00:15:39.452 "trsvcid": "4421" 00:15:39.452 }, 00:15:39.452 "method": "nvmf_subsystem_remove_listener", 00:15:39.452 "req_id": 1 00:15:39.452 } 00:15:39.452 Got JSON-RPC error response 00:15:39.452 response: 00:15:39.452 { 00:15:39.452 "code": -32602, 00:15:39.452 "message": "Invalid parameters" 00:15:39.452 }' 00:15:39.452 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:39.452 { 00:15:39.452 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:39.452 "listen_address": { 00:15:39.453 "trtype": "tcp", 00:15:39.453 "traddr": "", 00:15:39.453 "trsvcid": "4421" 00:15:39.453 }, 00:15:39.453 "method": "nvmf_subsystem_remove_listener", 00:15:39.453 "req_id": 1 00:15:39.453 } 00:15:39.453 Got JSON-RPC error response 00:15:39.453 response: 00:15:39.453 { 00:15:39.453 "code": -32602, 00:15:39.453 "message": "Invalid parameters" 00:15:39.453 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:39.453 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13053 -i 0 00:15:39.714 [2024-12-06 17:31:31.539098] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13053: invalid cntlid range [0-65519] 00:15:39.714 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:39.714 { 00:15:39.714 "nqn": "nqn.2016-06.io.spdk:cnode13053", 00:15:39.714 "min_cntlid": 0, 00:15:39.714 "method": "nvmf_create_subsystem", 00:15:39.714 "req_id": 1 00:15:39.714 } 00:15:39.714 Got JSON-RPC error response 00:15:39.714 response: 00:15:39.714 { 00:15:39.714 "code": -32602, 00:15:39.714 "message": "Invalid cntlid range 
[0-65519]" 00:15:39.714 }' 00:15:39.714 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:39.714 { 00:15:39.714 "nqn": "nqn.2016-06.io.spdk:cnode13053", 00:15:39.714 "min_cntlid": 0, 00:15:39.714 "method": "nvmf_create_subsystem", 00:15:39.714 "req_id": 1 00:15:39.714 } 00:15:39.714 Got JSON-RPC error response 00:15:39.714 response: 00:15:39.714 { 00:15:39.714 "code": -32602, 00:15:39.714 "message": "Invalid cntlid range [0-65519]" 00:15:39.714 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:39.714 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9941 -i 65520 00:15:39.714 [2024-12-06 17:31:31.727766] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9941: invalid cntlid range [65520-65519] 00:15:39.714 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:39.714 { 00:15:39.714 "nqn": "nqn.2016-06.io.spdk:cnode9941", 00:15:39.714 "min_cntlid": 65520, 00:15:39.714 "method": "nvmf_create_subsystem", 00:15:39.714 "req_id": 1 00:15:39.714 } 00:15:39.714 Got JSON-RPC error response 00:15:39.714 response: 00:15:39.714 { 00:15:39.714 "code": -32602, 00:15:39.714 "message": "Invalid cntlid range [65520-65519]" 00:15:39.714 }' 00:15:39.714 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:39.714 { 00:15:39.714 "nqn": "nqn.2016-06.io.spdk:cnode9941", 00:15:39.714 "min_cntlid": 65520, 00:15:39.714 "method": "nvmf_create_subsystem", 00:15:39.714 "req_id": 1 00:15:39.714 } 00:15:39.714 Got JSON-RPC error response 00:15:39.714 response: 00:15:39.714 { 00:15:39.714 "code": -32602, 00:15:39.714 "message": "Invalid cntlid range [65520-65519]" 00:15:39.714 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:39.714 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26947 -I 0 00:15:39.976 [2024-12-06 17:31:31.916347] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26947: invalid cntlid range [1-0] 00:15:39.976 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:39.976 { 00:15:39.976 "nqn": "nqn.2016-06.io.spdk:cnode26947", 00:15:39.976 "max_cntlid": 0, 00:15:39.976 "method": "nvmf_create_subsystem", 00:15:39.976 "req_id": 1 00:15:39.976 } 00:15:39.976 Got JSON-RPC error response 00:15:39.976 response: 00:15:39.976 { 00:15:39.976 "code": -32602, 00:15:39.976 "message": "Invalid cntlid range [1-0]" 00:15:39.976 }' 00:15:39.976 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:39.976 { 00:15:39.976 "nqn": "nqn.2016-06.io.spdk:cnode26947", 00:15:39.976 "max_cntlid": 0, 00:15:39.976 "method": "nvmf_create_subsystem", 00:15:39.976 "req_id": 1 00:15:39.976 } 00:15:39.976 Got JSON-RPC error response 00:15:39.976 response: 00:15:39.976 { 00:15:39.976 "code": -32602, 00:15:39.976 "message": "Invalid cntlid range [1-0]" 00:15:39.976 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:39.976 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23274 -I 65520 00:15:40.237 [2024-12-06 
17:31:32.104955] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23274: invalid cntlid range [1-65520] 00:15:40.237 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:40.237 { 00:15:40.237 "nqn": "nqn.2016-06.io.spdk:cnode23274", 00:15:40.237 "max_cntlid": 65520, 00:15:40.237 "method": "nvmf_create_subsystem", 00:15:40.237 "req_id": 1 00:15:40.237 } 00:15:40.237 Got JSON-RPC error response 00:15:40.237 response: 00:15:40.237 { 00:15:40.237 "code": -32602, 00:15:40.237 "message": "Invalid cntlid range [1-65520]" 00:15:40.237 }' 00:15:40.237 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:40.237 { 00:15:40.237 "nqn": "nqn.2016-06.io.spdk:cnode23274", 00:15:40.237 "max_cntlid": 65520, 00:15:40.237 "method": "nvmf_create_subsystem", 00:15:40.237 "req_id": 1 00:15:40.237 } 00:15:40.237 Got JSON-RPC error response 00:15:40.237 response: 00:15:40.237 { 00:15:40.237 "code": -32602, 00:15:40.237 "message": "Invalid cntlid range [1-65520]" 00:15:40.237 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:40.237 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8427 -i 6 -I 5 00:15:40.237 [2024-12-06 17:31:32.293598] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8427: invalid cntlid range [6-5] 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:40.499 { 00:15:40.499 "nqn": "nqn.2016-06.io.spdk:cnode8427", 00:15:40.499 "min_cntlid": 6, 00:15:40.499 "max_cntlid": 5, 00:15:40.499 "method": "nvmf_create_subsystem", 00:15:40.499 "req_id": 1 00:15:40.499 } 00:15:40.499 Got JSON-RPC error response 00:15:40.499 response: 00:15:40.499 { 00:15:40.499 "code": -32602, 00:15:40.499 "message": "Invalid cntlid range [6-5]" 00:15:40.499 }' 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:40.499 { 00:15:40.499 "nqn": "nqn.2016-06.io.spdk:cnode8427", 00:15:40.499 "min_cntlid": 6, 00:15:40.499 "max_cntlid": 5, 00:15:40.499 "method": "nvmf_create_subsystem", 00:15:40.499 "req_id": 1 00:15:40.499 } 00:15:40.499 Got JSON-RPC error response 00:15:40.499 response: 00:15:40.499 { 00:15:40.499 "code": -32602, 00:15:40.499 "message": "Invalid cntlid range [6-5]" 00:15:40.499 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:40.499 { 00:15:40.499 "name": "foobar", 00:15:40.499 "method": "nvmf_delete_target", 00:15:40.499 "req_id": 1 00:15:40.499 } 00:15:40.499 Got JSON-RPC error response 00:15:40.499 response: 00:15:40.499 { 00:15:40.499 "code": -32602, 00:15:40.499 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:15:40.499 }' 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:40.499 { 00:15:40.499 "name": "foobar", 00:15:40.499 "method": "nvmf_delete_target", 00:15:40.499 "req_id": 1 00:15:40.499 } 00:15:40.499 Got JSON-RPC error response 00:15:40.499 response: 00:15:40.499 { 00:15:40.499 "code": -32602, 00:15:40.499 "message": "The specified target doesn't exist, cannot delete it." 00:15:40.499 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.499 rmmod nvme_tcp 00:15:40.499 rmmod nvme_fabrics 00:15:40.499 rmmod nvme_keyring 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1637360 ']' 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1637360 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1637360 ']' 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1637360 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1637360 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1637360' 00:15:40.499 killing process with pid 1637360 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1637360 00:15:40.499 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1637360 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:40.760 17:31:32 
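
For reference, the mechanic this nvmf_invalid test just exercised, condensed into a minimal standalone sketch: build a random "model number" one byte at a time with printf %x / echo -e (the long character-by-character trace earlier in this test), hand it to nvmf_create_subsystem, and assert the JSON-RPC rejection. The gen_random helper and its character pool are assumptions for illustration; the real target/invalid.sh may pick bytes differently, and it also carries a leading-dash guard (the [[ * == \- ]] check at invalid.sh@28) that this sketch omits.

    gen_random() {
        local length=$1 string= ll c
        for (( ll = 0; ll < length; ll++ )); do
            c=$(( RANDOM % 95 + 32 ))                   # printable ASCII 32..126
            string+=$(echo -e "\\x$(printf %x "$c")")   # hex-escape the code, then decode it
        done
        echo "$string"
    }

    model=$(gen_random 41)
    out=$(scripts/rpc.py nvmf_create_subsystem -d "$model" \
          nqn.2016-06.io.spdk:cnode21999 2>&1) || true
    [[ $out == *"Invalid MN"* ]]    # expect code -32602, as captured in the trace above

The same capture-and-match shape covers the cntlid cases above: each boundary value ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) is sent once and the output is matched against *"Invalid cntlid range"*.
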
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.760 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:43.310 00:15:43.310 real 0m14.174s 00:15:43.310 user 0m21.273s 00:15:43.310 sys 0m6.665s 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:43.310 ************************************ 00:15:43.310 END TEST nvmf_invalid 00:15:43.310 ************************************ 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:43.310 ************************************ 00:15:43.310 START TEST nvmf_connect_stress 00:15:43.310 ************************************ 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:43.310 * Looking for test storage... 
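
A side note on the iptr cleanup traced a few lines up: it pairs with the rule insertion that appears later in this log, where each iptables rule is added with an SPDK_NVMF comment tag so teardown can strip exactly those rules and nothing else. A minimal sketch of that add/remove pairing, using the interface and port from this run:

    # add: tag the rule so cleanup can find it again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # remove: rewrite the ruleset without any SPDK_NVMF-tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore
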
00:15:43.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:15:43.310 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:43.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.310 --rc genhtml_branch_coverage=1 00:15:43.310 --rc genhtml_function_coverage=1 00:15:43.310 --rc genhtml_legend=1 00:15:43.310 --rc geninfo_all_blocks=1 00:15:43.310 --rc geninfo_unexecuted_blocks=1 00:15:43.310 00:15:43.310 ' 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:43.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.310 --rc genhtml_branch_coverage=1 00:15:43.310 --rc genhtml_function_coverage=1 00:15:43.310 --rc genhtml_legend=1 00:15:43.310 --rc geninfo_all_blocks=1 00:15:43.310 --rc geninfo_unexecuted_blocks=1 00:15:43.310 00:15:43.310 ' 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:43.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.310 --rc genhtml_branch_coverage=1 00:15:43.310 --rc genhtml_function_coverage=1 00:15:43.310 --rc genhtml_legend=1 00:15:43.310 --rc geninfo_all_blocks=1 00:15:43.310 --rc geninfo_unexecuted_blocks=1 00:15:43.310 00:15:43.310 ' 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:43.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.310 --rc genhtml_branch_coverage=1 00:15:43.310 --rc genhtml_function_coverage=1 00:15:43.310 --rc genhtml_legend=1 00:15:43.310 --rc geninfo_all_blocks=1 00:15:43.310 --rc geninfo_unexecuted_blocks=1 00:15:43.310 00:15:43.310 ' 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.310 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:43.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:43.311 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:51.454 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:51.455 17:31:42 
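
The "[: : integer expression expected" message above is worth a gloss: common.sh line 33 ran '[' '' -eq 1 ']', and bash's test builtin cannot compare an empty string numerically, so it prints the error and returns false; the script simply takes the false branch and continues. A hedged sketch of the failure and the usual guard (the variable name here is hypothetical, not necessarily what common.sh uses):

    flag=
    [ "$flag" -eq 1 ]           # [: : integer expression expected (status 2, treated as false)
    [ "${flag:-0}" -eq 1 ]      # empty/unset defaults to 0; compares cleanly as false
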
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:51.455 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:51.455 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:51.455 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:51.455 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:51.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:15:51.455 00:15:51.455 --- 10.0.0.2 ping statistics --- 00:15:51.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.455 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:15:51.455 00:15:51.455 --- 10.0.0.1 ping statistics --- 00:15:51.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.455 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1642518 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1642518 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1642518 ']' 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:51.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.455 17:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.455 [2024-12-06 17:31:42.625356] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:15:51.455 [2024-12-06 17:31:42.625427] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.455 [2024-12-06 17:31:42.723750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:51.455 [2024-12-06 17:31:42.775352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.455 [2024-12-06 17:31:42.775403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.455 [2024-12-06 17:31:42.775412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.455 [2024-12-06 17:31:42.775420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.455 [2024-12-06 17:31:42.775426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.455 [2024-12-06 17:31:42.777302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.455 [2024-12-06 17:31:42.777463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.455 [2024-12-06 17:31:42.777463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.455 [2024-12-06 17:31:43.506322] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
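Condensed, the bring-up traced above (and completed in the lines just below) is: move one port of the back-to-back-cabled E810 pair into a private network namespace for the target, address both ends of the link, punch a firewall hole for port 4420, verify reachability in both directions, then start nvmf_tgt inside the namespace and configure it over JSON-RPC. A minimal sketch of the equivalent commands follows; interface names, addresses, and RPC arguments are copied from this trace, while the direct scripts/rpc.py invocations stand in for the harness's rpc_cmd wrapper and are an assumption about paths:

    # loopback topology (from nvmf_tcp_init)
    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # port 0 serves the target
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # port 1 stays with the initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

    # target bring-up over RPC (flags as in the trace)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512      # backing device for the stress run

The listener and null-bdev RPCs are the ones executed in the trace immediately below, after which connect_stress is launched against nqn.2016-06.io.spdk:cnode1 and polled with kill -0.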
00:15:51.455 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.717 [2024-12-06 17:31:43.531943] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.717 NULL1 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1642682 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.717 17:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.717 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.979 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.979 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:51.979 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.979 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.979 17:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.553 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.553 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:52.553 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.553 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.553 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.814 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.814 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:52.814 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.814 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.814 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.075 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.075 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:53.075 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.075 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.075 17:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.336 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.336 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:53.336 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.336 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.336 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.596 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.596 17:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:53.596 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.596 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.596 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.166 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.166 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:54.166 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.166 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.166 17:31:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.426 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.426 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:54.426 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.426 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.426 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.687 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.687 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:54.687 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.687 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.687 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.947 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.947 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:54.947 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.947 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.947 17:31:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.207 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.207 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:55.207 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.207 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.207 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.778 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.778 17:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:55.778 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.778 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.778 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.039 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.039 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:56.039 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.039 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.039 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.298 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.298 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:56.298 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.298 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.298 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.558 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.558 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:56.558 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.558 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.558 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.819 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.819 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:56.819 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.819 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.819 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.390 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.390 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:57.390 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.390 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.390 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.651 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.651 17:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:57.651 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.651 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.651 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.912 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.912 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:57.912 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.912 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.912 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.173 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.173 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:58.173 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.173 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.173 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.433 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.433 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:58.433 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.433 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.433 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.003 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.003 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:59.003 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.003 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.003 17:31:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.265 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.265 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:59.265 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.265 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.265 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.525 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.525 17:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:59.525 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.525 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.525 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.787 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.787 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:15:59.787 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.787 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.787 17:31:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.358 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.358 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:16:00.358 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.358 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.358 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.619 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.619 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:16:00.619 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.619 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.619 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.880 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.880 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:16:00.880 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.880 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.880 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.140 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.140 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:16:01.140 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.140 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.140 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.401 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.401 17:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:16:01.401 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.401 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.401 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.661 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:01.921 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.921 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1642682 00:16:01.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1642682) - No such process 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1642682 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:01.922 rmmod nvme_tcp 00:16:01.922 rmmod nvme_fabrics 00:16:01.922 rmmod nvme_keyring 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1642518 ']' 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1642518 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1642518 ']' 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1642518 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1642518 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
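The teardown sequence here carries a safety check worth noting: before signalling nvmfpid, killprocess resolves the comm name of the PID with ps --no-headers -o comm= and refuses to proceed if it reads sudo, so a recycled or stale PID can never take down the privilege wrapper. A condensed reconstruction from this trace (the real helper lives in autotest_common.sh; names and exact control flow here are approximations, not the verbatim source):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                 # no pid recorded, nothing to do
        kill -0 "$pid" 2> /dev/null || return 0   # already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1    # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap it and propagate the exit status
    }

In this run the name resolves to reactor_1 (the SPDK reactor pinned by the -m 0xE core mask), so the kill and wait on pid 1642518 go ahead as seen just below.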
00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1642518' 00:16:01.922 killing process with pid 1642518 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1642518 00:16:01.922 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1642518 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.182 17:31:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.094 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:04.094 00:16:04.094 real 0m21.255s 00:16:04.094 user 0m42.264s 00:16:04.094 sys 0m9.327s 00:16:04.094 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.094 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.094 ************************************ 00:16:04.094 END TEST nvmf_connect_stress 00:16:04.094 ************************************ 00:16:04.094 17:31:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:04.094 17:31:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.094 17:31:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.094 17:31:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.355 ************************************ 00:16:04.355 START TEST nvmf_fused_ordering 00:16:04.355 ************************************ 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:04.355 * Looking for test storage... 
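Each test script in this job is dispatched through the same run_test wrapper, which is what produces the START TEST/END TEST banners and the real/user/sys totals above, and which first checks that it was handed a test name plus at least one command word (the '[' 3 -le 1 ']' line in the trace). A behavioural sketch, not the verbatim helper from autotest_common.sh (which also toggles xtrace around the banners):

    run_test() {
        [ $# -le 1 ] && return 1                # need a test name plus a command
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                               # emits the real/user/sys block on exit
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # path abbreviated relative to the workspace seen in the log
    run_test nvmf_fused_ordering test/nvmf/target/fused_ordering.sh --transport=tcp

This is why the nvmf_fused_ordering trace that follows repeats the whole nvmftestinit sequence, PCI discovery, namespace setup, and ping checks: every target script starts from a clean environment rather than inheriting the previous test's state.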
00:16:04.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.355 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:04.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.356 --rc genhtml_branch_coverage=1 00:16:04.356 --rc genhtml_function_coverage=1 00:16:04.356 --rc genhtml_legend=1 00:16:04.356 --rc geninfo_all_blocks=1 00:16:04.356 --rc geninfo_unexecuted_blocks=1 00:16:04.356 00:16:04.356 ' 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:04.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.356 --rc genhtml_branch_coverage=1 00:16:04.356 --rc genhtml_function_coverage=1 00:16:04.356 --rc genhtml_legend=1 00:16:04.356 --rc geninfo_all_blocks=1 00:16:04.356 --rc geninfo_unexecuted_blocks=1 00:16:04.356 00:16:04.356 ' 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:04.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.356 --rc genhtml_branch_coverage=1 00:16:04.356 --rc genhtml_function_coverage=1 00:16:04.356 --rc genhtml_legend=1 00:16:04.356 --rc geninfo_all_blocks=1 00:16:04.356 --rc geninfo_unexecuted_blocks=1 00:16:04.356 00:16:04.356 ' 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:04.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.356 --rc genhtml_branch_coverage=1 00:16:04.356 --rc genhtml_function_coverage=1 00:16:04.356 --rc genhtml_legend=1 00:16:04.356 --rc geninfo_all_blocks=1 00:16:04.356 --rc geninfo_unexecuted_blocks=1 00:16:04.356 00:16:04.356 ' 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:04.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:04.356 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.357 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:04.357 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:04.357 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:04.357 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.357 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.357 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.357 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:04.357 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:04.357 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:04.357 17:31:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:12.498 17:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:12.498 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:12.498 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:12.498 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:12.498 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:12.499 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}")
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:12.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:12.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms
00:16:12.499
00:16:12.499 --- 10.0.0.2 ping statistics ---
00:16:12.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:12.499 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:12.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:12.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms
00:16:12.499
00:16:12.499 --- 10.0.0.1 ping statistics ---
00:16:12.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:12.499 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
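What nvmf_tcp_init traced above amounts to: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the default namespace as the initiator, the firewall is opened for the NVMe/TCP port, and both directions are ping-verified. A condensed sketch of the same setup, using only the names and addresses from this run:

    # Target side lives in its own netns so traffic crosses the physical link
    # rather than the local stack.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP default port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
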
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1647470
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1647470
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1647470 ']'
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:12.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:12.499 17:32:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:12.499 [2024-12-06 17:32:03.911712] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:16:12.499 [2024-12-06 17:32:03.911775] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:12.499 [2024-12-06 17:32:04.011553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:12.499 [2024-12-06 17:32:04.061210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:12.499 [2024-12-06 17:32:04.061268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:12.499 [2024-12-06 17:32:04.061277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:12.499 [2024-12-06 17:32:04.061284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:12.499 [2024-12-06 17:32:04.061291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:12.499 [2024-12-06 17:32:04.062093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:12.761 [2024-12-06 17:32:04.793238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:12.761 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
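rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the target over /var/tmp/spdk.sock, and each '[[ 0 == 0 ]]' that follows a call is the harness asserting the RPC's exit status. Replayed by hand, the two calls so far would look roughly like the sketch below; the flag readings in the comments are the usual rpc.py meanings, not something this log states:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # -t tcp selects the transport; -u 8192 sets the io_unit_size; -o is the
    # extra TCP-specific toggle the suite appends for tcp runs (meaning not
    # shown in this trace).
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    # -a: allow any host to connect, -s: serial number, -m: max namespaces.
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
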
00:16:12.762 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:12.762 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.762 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:12.762 [2024-12-06 17:32:04.817521] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:12.762 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.762 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:16:12.762 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.762 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:13.023 NULL1
00:16:13.023 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.023 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:16:13.023 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.023 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:13.023 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.023 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:16:13.023 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.023 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:13.023 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.023 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
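The remaining bring-up gives the subsystem a TCP listener and a namespace to exercise: a 1000 MB, 512-byte-block null bdev (hence the 'Namespace ID: 1 size: 1GB' line below) is created and attached, and only then is the fused_ordering binary pointed at the listener. The equivalent manual sequence, a sketch built from the exact arguments traced above and the $rpc_py variable from the previous sketch:

    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_null_create NULL1 1000 512   # name, size in MB, block size
    $rpc_py bdev_wait_for_examine             # let bdev examination settle first
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Drive the test binary at the listener over NVMe/TCP:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
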
00:16:13.023 [2024-12-06 17:32:04.886579] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:16:13.023 [2024-12-06 17:32:04.886626] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647504 ]
00:16:13.595 Attached to nqn.2016-06.io.spdk:cnode1
00:16:13.595 Namespace ID: 1 size: 1GB
00:16:13.595 fused_ordering(0)
[ fused_ordering(1) through fused_ordering(1022): 1022 further identical per-iteration counter lines, elapsed 00:16:13.595 to 00:16:15.389, condensed ]
00:16:15.389 fused_ordering(1023)
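Each fused_ordering(n) line is per-iteration progress output from the test binary; the run passes because all 1024 iterations complete and the script reaches its clean-exit path (the 'return 0' below). The teardown that follows is not spelled out in the test body: it flows through the signal/exit trap installed just before nvmf_create_transport. The pattern, reconstructed from the trap line and the two fused_ordering.sh steps visible in this trace:

    # Cleanup runs whether the test passes, fails, or is interrupted.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    # ... configure the target, run the fused_ordering binary ...
    trap - SIGINT SIGTERM EXIT   # success path: disarm, then clean up explicitly
    nvmftestfini                 # unload nvme-tcp, kill nvmf_tgt, restore iptables, drop the netns
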
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:15.389 rmmod nvme_tcp
00:16:15.389 rmmod nvme_fabrics
00:16:15.389 rmmod nvme_keyring
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
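nvmfcleanup's unload of the initiator-side kernel modules runs inside a retry loop with errexit suspended, since nvme-tcp can keep a reference for a moment after the last disconnect; here the first pass succeeds, and rmmod reports all three modules gone. A sketch of that idiom; anything beyond the for, modprobe, and set +e/-e lines traced above (such as the sleep between attempts) is assumed rather than shown by this trace:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # prints rmmod nvme_tcp/nvme_fabrics/nvme_keyring on success
        sleep 1                            # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e
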
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1647470 ']'
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1647470
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1647470 ']'
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1647470
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:15.389 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1647470
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1647470'
00:16:15.650 killing process with pid 1647470
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1647470
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1647470
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:15.650 17:32:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:18.195
00:16:18.195 real 0m13.573s
00:16:18.195 user 0m7.233s
00:16:18.195 sys 0m7.308s
00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:18.195 ************************************
00:16:18.195 END TEST nvmf_fused_ordering
************************************ 00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:18.195 ************************************ 00:16:18.195 START TEST nvmf_ns_masking 00:16:18.195 ************************************ 00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:18.195 * Looking for test storage... 00:16:18.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:16:18.195 17:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:18.195 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:18.195 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.195 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.195 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.195 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.195 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.195 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.195 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.195 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:18.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.196 --rc genhtml_branch_coverage=1 00:16:18.196 --rc genhtml_function_coverage=1 00:16:18.196 --rc genhtml_legend=1 00:16:18.196 --rc geninfo_all_blocks=1 00:16:18.196 --rc geninfo_unexecuted_blocks=1 00:16:18.196 00:16:18.196 ' 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:18.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.196 --rc genhtml_branch_coverage=1 00:16:18.196 --rc genhtml_function_coverage=1 00:16:18.196 --rc genhtml_legend=1 00:16:18.196 --rc geninfo_all_blocks=1 00:16:18.196 --rc geninfo_unexecuted_blocks=1 00:16:18.196 00:16:18.196 ' 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:18.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.196 --rc genhtml_branch_coverage=1 00:16:18.196 --rc genhtml_function_coverage=1 00:16:18.196 --rc genhtml_legend=1 00:16:18.196 --rc geninfo_all_blocks=1 00:16:18.196 --rc geninfo_unexecuted_blocks=1 00:16:18.196 00:16:18.196 ' 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:18.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.196 --rc genhtml_branch_coverage=1 00:16:18.196 --rc genhtml_function_coverage=1 00:16:18.196 --rc genhtml_legend=1 00:16:18.196 --rc geninfo_all_blocks=1 00:16:18.196 --rc geninfo_unexecuted_blocks=1 00:16:18.196 00:16:18.196 ' 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.196 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:18.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=178c20a5-9d17-4ba5-b819-5aa4fa0298fd 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d493649a-56df-443c-b6e9-68fb5e971e40 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b46884ed-7667-469c-98c4-1ac73930cdf2 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:18.197 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:26.338 17:32:17 
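By this point ns_masking.sh has generated every identifier the test revolves around: two namespace UUIDs, the subsystem NQN, two host NQNs, and a host ID that is later pinned on the initiator with nvme connect -I so the target's masking rules have a stable host to match. Condensed from the trace (the concrete UUIDs differ per run, since they come from uuidgen):

    ns1uuid=$(uuidgen)                    # 178c20a5-9d17-4ba5-b819-5aa4fa0298fd in this run
    ns2uuid=$(uuidgen)                    # d493649a-56df-443c-b6e9-68fb5e971e40
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1    # the host that will be granted/revoked visibility
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)                     # passed to 'nvme connect -I'
    loops=5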
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:26.338 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:26.338 17:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:26.338 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.338 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:26.339 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
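The scan above matches PCI vendor:device pairs against the known E810/X722/mlx5 ID tables and then resolves each matched function to its kernel net device through sysfs, also requiring the link to be administratively up. A simplified sketch of the resolution step (assuming pci_devs was filled by the ID matching just traced):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs: PCI function -> netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done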
00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:26.339 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:26.339 17:32:17 
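The two E810 ports are then split so that one machine can play both roles: cvl_0_0 moves into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2/24, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1/24. Condensed from the trace (the loopback bring-up follows just below):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up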
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:26.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:16:26.339 00:16:26.339 --- 10.0.0.2 ping statistics --- 00:16:26.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.339 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:26.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:16:26.339 00:16:26.339 --- 10.0.0.1 ping statistics --- 00:16:26.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.339 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1649991 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1649991 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1649991 ']' 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.339 17:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:26.339 [2024-12-06 17:32:17.563481] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:16:26.340 [2024-12-06 17:32:17.563549] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.340 [2024-12-06 17:32:17.662672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.340 [2024-12-06 17:32:17.712740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.340 [2024-12-06 17:32:17.712792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.340 [2024-12-06 17:32:17.712801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.340 [2024-12-06 17:32:17.712808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.340 [2024-12-06 17:32:17.712815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
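After punching an iptables ACCEPT rule for TCP/4420 on the initiator interface and ping-verifying both directions, nvmfappstart launches the target binary inside the target namespace and polls until its RPC socket answers. The startup, condensed (waitforlisten is the autotest helper seen in the trace; by default it waits on /var/tmp/spdk.sock):

    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF &    # -i: shm id, -e: tracepoint group mask
    nvmfpid=$!
    waitforlisten "$nvmfpid"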
00:16:26.340 [2024-12-06 17:32:17.713597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.340 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.340 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:26.340 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:26.340 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:26.340 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.600 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:26.600 [2024-12-06 17:32:18.589184] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.600 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:26.600 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:26.600 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:26.861 Malloc1 00:16:26.861 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:27.121 Malloc2 00:16:27.121 17:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:27.121 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:27.382 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.643 [2024-12-06 17:32:19.524413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.643 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:27.643 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b46884ed-7667-469c-98c4-1ac73930cdf2 -a 10.0.0.2 -s 4420 -i 4 00:16:27.903 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:27.903 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:27.903 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:27.903 17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:27.903 
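This block is the canonical target setup for the masking test: a TCP transport, two 64 MiB / 512 B-block malloc bdevs, a subsystem with Malloc1 attached as namespace 1 (auto-visible by default), a listener on 10.0.0.2:4420, and a kernel-initiator connection that pins the host NQN and host ID the masking rules will key on; waitforserial then polls lsblk until a device with serial SPDKISFASTANDAWESOME shows up. Condensed ($rpc is shorthand for the traced rpc.py path):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I b46884ed-7667-469c-98c4-1ac73930cdf2 -a 10.0.0.2 -s 4420 -i 4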
17:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:29.816 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:30.077 [ 0]:0x1 00:16:30.077 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:30.077 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:30.077 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a4df307a93c647f78f6c4cd4f56720c8 00:16:30.077 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a4df307a93c647f78f6c4cd4f56720c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.077 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:30.077 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:30.077 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:30.077 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:30.077 [ 0]:0x1 00:16:30.077 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:30.077 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a4df307a93c647f78f6c4cd4f56720c8 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a4df307a93c647f78f6c4cd4f56720c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.338 17:32:22 
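ns_is_visible is the probe doing the real work in these checks: a namespace counts as visible when it appears in nvme list-ns output and id-ns reports a non-zero NGUID, while a masked namespace yields the all-zeros NGUID. Approximately (reconstructed from the @43-@45 trace lines; the argument is the NSID, e.g. 0x1):

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"      # e.g. "[ 0]:0x1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }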
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:30.338 [ 1]:0x2 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4ab43613a1be4439bfa3b2f54dccb1e6 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4ab43613a1be4439bfa3b2f54dccb1e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.338 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.599 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:30.860 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:30.860 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b46884ed-7667-469c-98c4-1ac73930cdf2 -a 10.0.0.2 -s 4420 -i 4 00:16:30.860 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:30.860 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:30.860 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.860 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:16:30.860 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:16:30.860 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:33.402 17:32:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:33.402 [ 0]:0x2 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=4ab43613a1be4439bfa3b2f54dccb1e6 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4ab43613a1be4439bfa3b2f54dccb1e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:33.402 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:33.402 [ 0]:0x1 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a4df307a93c647f78f6c4cd4f56720c8 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a4df307a93c647f78f6c4cd4f56720c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:33.403 [ 1]:0x2 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4ab43613a1be4439bfa3b2f54dccb1e6 00:16:33.403 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4ab43613a1be4439bfa3b2f54dccb1e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.662 17:32:25 
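That is the positive half of the masking flow: namespace 1 was re-attached with --no-auto-visible, so it stayed hidden from the connected host until nvmf_ns_add_host explicitly granted host1 access, after which both namespaces pass ns_is_visible. The two control knobs, as exercised here (same $rpc shorthand as above):

    # Attach a namespace that no host can see by default ...
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # ... then grant visibility to one host NQN.
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1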
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:33.662 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:33.662 [ 0]:0x2 00:16:33.922 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:33.922 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:33.922 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4ab43613a1be4439bfa3b2f54dccb1e6 00:16:33.922 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4ab43613a1be4439bfa3b2f54dccb1e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.922 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:33.922 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:33.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.922 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:34.182 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:34.182 17:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b46884ed-7667-469c-98c4-1ac73930cdf2 -a 10.0.0.2 -s 4420 -i 4 00:16:34.182 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:34.182 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:34.182 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.182 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:34.182 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:34.182 17:32:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:36.093 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:36.093 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:36.093 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.093 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:36.093 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.093 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:36.093 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:36.093 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:36.355 [ 0]:0x1 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a4df307a93c647f78f6c4cd4f56720c8 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a4df307a93c647f78f6c4cd4f56720c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:36.355 [ 1]:0x2 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4ab43613a1be4439bfa3b2f54dccb1e6 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4ab43613a1be4439bfa3b2f54dccb1e6 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:36.355 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.616 [ 0]:0x2 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4ab43613a1be4439bfa3b2f54dccb1e6 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4ab43613a1be4439bfa3b2f54dccb1e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:36.616 17:32:28 
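nvmf_ns_remove_host is the inverse: the still-connected host watches namespace 1 disappear while namespace 2 stays visible, and the NOT wrapper asserts the failure. NOT itself is the autotest helper that inverts an exit status; a simplified sketch (the real one in autotest_common.sh also special-cases signal exits above 128, as the "es > 128" trace lines show):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # succeed only when the wrapped command failed
    }
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    NOT ns_is_visible 0x1    # must fail: host1 lost access
    ns_is_visible 0x2        # unaffected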
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:36.616 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:36.877 [2024-12-06 17:32:28.809714] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:36.877 request: 00:16:36.877 { 00:16:36.877 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.877 "nsid": 2, 00:16:36.877 "host": "nqn.2016-06.io.spdk:host1", 00:16:36.877 "method": "nvmf_ns_remove_host", 00:16:36.877 "req_id": 1 00:16:36.877 } 00:16:36.877 Got JSON-RPC error response 00:16:36.877 response: 00:16:36.877 { 00:16:36.877 "code": -32602, 00:16:36.877 "message": "Invalid parameters" 00:16:36.877 } 00:16:36.877 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:36.877 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.877 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.877 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.877 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:36.878 17:32:28 
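This one is a deliberate failure case: namespace 2 was attached without --no-auto-visible, so it evidently carries no per-host allow list and the target rejects the edit with JSON-RPC error -32602 (Invalid parameters), which NOT turns into a pass. The equivalent direct call, expected to fail:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if ! $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
        echo "rejected as expected: ns 2 is auto-visible, so there is no host list to edit"
    fi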
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:36.878 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:37.138 [ 0]:0x2 00:16:37.138 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:37.138 17:32:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4ab43613a1be4439bfa3b2f54dccb1e6 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4ab43613a1be4439bfa3b2f54dccb1e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1650285 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1650285 /var/tmp/host.sock 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1650285 ']' 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:37.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.138 17:32:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:37.398 [2024-12-06 17:32:29.226052] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:16:37.398 [2024-12-06 17:32:29.226102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650285 ] 00:16:37.399 [2024-12-06 17:32:29.315052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.399 [2024-12-06 17:32:29.350807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.970 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.970 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:37.970 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.231 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:38.491 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 178c20a5-9d17-4ba5-b819-5aa4fa0298fd 00:16:38.491 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:38.491 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 178C20A59D174BA5B8195AA4FA0298FD -i 00:16:38.753 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d493649a-56df-443c-b6e9-68fb5e971e40 00:16:38.753 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:38.753 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D493649A56DF443CB6E968FB5E971E40 -i 00:16:38.753 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:39.013 17:32:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:39.274 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:39.274 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:39.534 nvme0n1 00:16:39.534 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:39.534 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:39.795 nvme1n2 00:16:39.795 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:39.795 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:39.795 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:39.795 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:39.795 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:40.056 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:40.056 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:40.056 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:40.056 17:32:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:40.056 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 178c20a5-9d17-4ba5-b819-5aa4fa0298fd == \1\7\8\c\2\0\a\5\-\9\d\1\7\-\4\b\a\5\-\b\8\1\9\-\5\a\a\4\f\a\0\2\9\8\f\d ]] 00:16:40.056 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:40.056 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:40.056 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:40.317 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
d493649a-56df-443c-b6e9-68fb5e971e40 == \d\4\9\3\6\4\9\a\-\5\6\d\f\-\4\4\3\c\-\b\6\e\9\-\6\8\f\b\5\e\9\7\1\e\4\0 ]] 00:16:40.317 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 178c20a5-9d17-4ba5-b819-5aa4fa0298fd 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 178C20A59D174BA5B8195AA4FA0298FD 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 178C20A59D174BA5B8195AA4FA0298FD 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:40.579 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 178C20A59D174BA5B8195AA4FA0298FD 00:16:40.840 [2024-12-06 17:32:32.792102] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:40.840 [2024-12-06 17:32:32.792129] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:40.840 [2024-12-06 17:32:32.792136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.840 request: 00:16:40.840 { 00:16:40.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.840 "namespace": { 00:16:40.840 "bdev_name": 
"invalid", 00:16:40.840 "nsid": 1, 00:16:40.840 "nguid": "178C20A59D174BA5B8195AA4FA0298FD", 00:16:40.840 "no_auto_visible": false, 00:16:40.840 "hide_metadata": false 00:16:40.840 }, 00:16:40.840 "method": "nvmf_subsystem_add_ns", 00:16:40.840 "req_id": 1 00:16:40.840 } 00:16:40.840 Got JSON-RPC error response 00:16:40.840 response: 00:16:40.840 { 00:16:40.840 "code": -32602, 00:16:40.840 "message": "Invalid parameters" 00:16:40.840 } 00:16:40.840 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:40.840 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:40.840 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:40.840 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:40.840 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 178c20a5-9d17-4ba5-b819-5aa4fa0298fd 00:16:40.840 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:40.840 17:32:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 178C20A59D174BA5B8195AA4FA0298FD -i 00:16:41.109 17:32:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:43.016 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:43.016 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:43.016 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:43.275 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1650285 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1650285 ']' 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1650285 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1650285 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1650285' 00:16:43.276 killing process with pid 1650285 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1650285 00:16:43.276 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1650285 00:16:43.535 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:43.794 rmmod nvme_tcp 00:16:43.794 rmmod nvme_fabrics 00:16:43.794 rmmod nvme_keyring 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1649991 ']' 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1649991 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1649991 ']' 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1649991 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1649991 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1649991' 00:16:43.794 killing process with pid 1649991 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1649991 00:16:43.794 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1649991 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
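Note: every visibility assertion in the ns_masking trace above funnels through the ns_is_visible helper (ns_masking.sh@43-45 in the xtrace): grep the namespace ID out of `nvme list-ns`, pull the NGUID with `nvme id-ns ... -o json | jq -r .nguid`, and treat an all-zero NGUID as "not visible". A minimal bash reconstruction from the trace, assuming /dev/nvme0 is the connected controller and jq is installed; this is a simplified sketch, not the verbatim SPDK helper:

    ns_is_visible() {
        local nsid=$1 nguid
        # the namespace must appear in the controller's active namespace list
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        # and Identify Namespace must report a non-zero NGUID
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_is_visible 0x1 && echo "nsid 0x1 visible to this host"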
00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.053 17:32:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.959 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:45.959 00:16:45.959 real 0m28.184s 00:16:45.959 user 0m32.007s 00:16:45.959 sys 0m8.241s 00:16:45.959 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.959 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:45.959 ************************************ 00:16:45.959 END TEST nvmf_ns_masking 00:16:45.959 ************************************ 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.220 ************************************ 00:16:46.220 START TEST nvmf_nvme_cli 00:16:46.220 ************************************ 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:46.220 * Looking for test storage... 
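Note: the negative assertions in the masking test above (ns_masking.sh@107, @111, @112) wrap commands in NOT from autotest_common.sh, which succeeds only when the wrapped command fails; the xtrace shows it capturing the exit status into es and then evaluating (( !es == 0 )). A hedged simplification of that pattern (the real helper also validates its argument via valid_exec_arg and special-cases exit codes above 128, both omitted here):

    NOT() {
        local es=0
        "$@" || es=$?      # run the wrapped command, keep its exit status
        (( es != 0 ))      # succeed only if the command failed
    }

    # e.g. a namespace detached from this host must not be visible:
    NOT ns_is_visible 0x1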
00:16:46.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:46.220 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.481 --rc genhtml_branch_coverage=1 00:16:46.481 --rc genhtml_function_coverage=1 00:16:46.481 --rc genhtml_legend=1 00:16:46.481 --rc geninfo_all_blocks=1 00:16:46.481 --rc geninfo_unexecuted_blocks=1 00:16:46.481 00:16:46.481 ' 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.481 --rc genhtml_branch_coverage=1 00:16:46.481 --rc genhtml_function_coverage=1 00:16:46.481 --rc genhtml_legend=1 00:16:46.481 --rc geninfo_all_blocks=1 00:16:46.481 --rc geninfo_unexecuted_blocks=1 00:16:46.481 00:16:46.481 ' 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.481 --rc genhtml_branch_coverage=1 00:16:46.481 --rc genhtml_function_coverage=1 00:16:46.481 --rc genhtml_legend=1 00:16:46.481 --rc geninfo_all_blocks=1 00:16:46.481 --rc geninfo_unexecuted_blocks=1 00:16:46.481 00:16:46.481 ' 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.481 --rc genhtml_branch_coverage=1 00:16:46.481 --rc genhtml_function_coverage=1 00:16:46.481 --rc genhtml_legend=1 00:16:46.481 --rc geninfo_all_blocks=1 00:16:46.481 --rc geninfo_unexecuted_blocks=1 00:16:46.481 00:16:46.481 ' 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
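Note: the scripts/common.sh entries above trace `lt 1.15 2`, the version guard that selects the lcov coverage flags: both version strings are split on `.-:` and compared numerically field by field, with the shorter one padded by zeros. A sketch reconstructed from the xtrace; cmp_versions is simplified, and the real code additionally validates each field through decimal():

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local op=$2 IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v v1 v2 ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            v1=${ver1[v]:-0} v2=${ver2[v]:-0}   # missing fields count as 0
            ((v1 > v2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((v1 < v2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *=* ]]    # all fields equal: true only for <=, >=, ==
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"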
00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:46.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.481 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:46.482 17:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:46.482 17:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.620 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:54.621 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:54.621 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.621 
17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:54.621 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:54.621 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:54.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:16:54.621 00:16:54.621 --- 10.0.0.2 ping statistics --- 00:16:54.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.621 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:16:54.621 00:16:54.621 --- 10.0.0.1 ping statistics --- 00:16:54.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.621 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.621 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1652866 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1652866 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1652866 ']' 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.622 17:32:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.622 [2024-12-06 17:32:45.793797] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
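Note: before the target app starts, nvmf_tcp_init (traced a few entries back) builds the test network by pushing one port of the e810 pair into a private network namespace, so 10.0.0.1 to 10.0.0.2 traffic really crosses the physical link; the pings above verify both directions. Condensed from the trace, with interface names cvl_0_0/cvl_0_1 specific to this rig:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator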
00:16:54.622 [2024-12-06 17:32:45.793868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.622 [2024-12-06 17:32:45.892140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.622 [2024-12-06 17:32:45.947661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.622 [2024-12-06 17:32:45.947718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.622 [2024-12-06 17:32:45.947727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.622 [2024-12-06 17:32:45.947734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.622 [2024-12-06 17:32:45.947740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.622 [2024-12-06 17:32:45.949739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.622 [2024-12-06 17:32:45.949899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.622 [2024-12-06 17:32:45.950062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.622 [2024-12-06 17:32:45.950062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.622 [2024-12-06 17:32:46.663179] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.622 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.884 Malloc0 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
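Note: with nvmf_tgt running inside the namespace, the test provisions the target over JSON-RPC; the rpc_cmd calls begin here and continue through the next entries (the subsystem and listener steps follow below). The full sequence collapsed into one place, with the long rpc.py path shortened for readability:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
           -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420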
00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.884 Malloc1 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.884 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.885 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.885 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.885 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.885 [2024-12-06 17:32:46.761556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.885 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.885 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:54.885 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.885 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.885 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.885 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:16:54.885 00:16:54.885 Discovery Log Number of Records 2, Generation counter 2 00:16:54.885 =====Discovery Log Entry 0====== 00:16:54.885 trtype: tcp 00:16:54.885 adrfam: ipv4 00:16:54.885 subtype: current discovery subsystem 00:16:54.885 treq: not required 00:16:54.885 portid: 0 00:16:54.885 trsvcid: 4420 00:16:54.885 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:16:54.885 traddr: 10.0.0.2 00:16:54.885 eflags: explicit discovery connections, duplicate discovery information 00:16:54.885 sectype: none 00:16:54.885 =====Discovery Log Entry 1====== 00:16:54.885 trtype: tcp 00:16:54.885 adrfam: ipv4 00:16:54.885 subtype: nvme subsystem 00:16:54.885 treq: not required 00:16:54.885 portid: 0 00:16:54.885 trsvcid: 4420 00:16:54.885 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:54.885 traddr: 10.0.0.2 00:16:54.885 eflags: none 00:16:54.885 sectype: none 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:55.147 17:32:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:56.529 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:56.529 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:56.529 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:56.529 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:56.529 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:56.529 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:59.065 17:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:59.065 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:59.066 /dev/nvme0n2 ]] 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.066 17:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.066 rmmod nvme_tcp 00:16:59.066 rmmod nvme_fabrics 00:16:59.066 rmmod nvme_keyring 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1652866 ']' 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1652866 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1652866 ']' 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1652866 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1652866 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1652866' 00:16:59.066 killing process with pid 1652866 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1652866 00:16:59.066 17:32:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1652866 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.066 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:01.616 00:17:01.616 real 0m14.992s 00:17:01.616 user 0m22.447s 00:17:01.616 sys 0m6.280s 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:01.616 ************************************ 00:17:01.616 END TEST nvmf_nvme_cli 00:17:01.616 ************************************ 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.616 ************************************ 00:17:01.616 START TEST nvmf_vfio_user 00:17:01.616 ************************************ 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:17:01.616 * Looking for test storage... 00:17:01.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:01.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.616 --rc genhtml_branch_coverage=1 00:17:01.616 --rc genhtml_function_coverage=1 00:17:01.616 --rc genhtml_legend=1 00:17:01.616 --rc geninfo_all_blocks=1 00:17:01.616 --rc geninfo_unexecuted_blocks=1 00:17:01.616 00:17:01.616 ' 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:01.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.616 --rc genhtml_branch_coverage=1 00:17:01.616 --rc genhtml_function_coverage=1 00:17:01.616 --rc genhtml_legend=1 00:17:01.616 --rc geninfo_all_blocks=1 00:17:01.616 --rc geninfo_unexecuted_blocks=1 00:17:01.616 00:17:01.616 ' 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:01.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.616 --rc genhtml_branch_coverage=1 00:17:01.616 --rc genhtml_function_coverage=1 00:17:01.616 --rc genhtml_legend=1 00:17:01.616 --rc geninfo_all_blocks=1 00:17:01.616 --rc geninfo_unexecuted_blocks=1 00:17:01.616 00:17:01.616 ' 00:17:01.616 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:01.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.616 --rc genhtml_branch_coverage=1 00:17:01.617 --rc genhtml_function_coverage=1 00:17:01.617 --rc genhtml_legend=1 00:17:01.617 --rc geninfo_all_blocks=1 00:17:01.617 --rc geninfo_unexecuted_blocks=1 00:17:01.617 00:17:01.617 ' 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
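The cmp_versions walk traced through scripts/common.sh above is how the test decides whether the installed lcov predates 2.x: each version string is split on '.', '-' and ':' and compared field by field. A condensed sketch of that idea, numeric fields only (the real helper handles more operators and edge cases):

  lt() {  # returns 0 (true) when version $1 sorts before $2, e.g. lt 1.15 2
    local -a a b; local i n
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
  }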
00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1653097 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1653097' 00:17:01.617 Process pid: 1653097 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1653097 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1653097 ']' 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.617 17:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:01.617 [2024-12-06 17:32:53.472218] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:17:01.617 [2024-12-06 17:32:53.472293] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.617 [2024-12-06 17:32:53.563075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.617 [2024-12-06 17:32:53.597798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.617 [2024-12-06 17:32:53.597832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
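waitforlisten above in essence polls until the freshly launched nvmf_tgt (pid 1653097) answers on /var/tmp/spdk.sock; a minimal sketch of that polling idea, where the retry count and interval are arbitrary choices and rpc_get_methods serves only as a cheap liveness query:

  pid=1653097 sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo 'target exited early' >&2; exit 1; }
    scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done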
00:17:01.617 [2024-12-06 17:32:53.597839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.617 [2024-12-06 17:32:53.597844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.617 [2024-12-06 17:32:53.597848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.617 [2024-12-06 17:32:53.599311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.617 [2024-12-06 17:32:53.599465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.617 [2024-12-06 17:32:53.599619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.617 [2024-12-06 17:32:53.599621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.554 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.554 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:02.554 17:32:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:03.494 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:03.494 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:03.494 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:03.495 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:03.495 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:03.495 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:03.754 Malloc1 00:17:03.754 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:04.014 17:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:04.014 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:04.274 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:04.274 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:04.274 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:04.534 Malloc2 00:17:04.534 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
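Note the shape of setup_nvmf_vfio_user here: after the VFIOUSER transport is created, the same per-device steps repeat for each endpoint (device 2's add_ns and add_listener follow just below). Condensed into a loop, with every command taken verbatim from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done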
00:17:04.534 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:04.794 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:05.056 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:05.056 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:05.056 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:05.056 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:05.056 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:05.056 17:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:05.056 [2024-12-06 17:32:56.982080] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:17:05.056 [2024-12-06 17:32:56.982133] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653162 ] 00:17:05.056 [2024-12-06 17:32:57.020028] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:05.056 [2024-12-06 17:32:57.025324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:05.056 [2024-12-06 17:32:57.025342] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1cc2cec000 00:17:05.056 [2024-12-06 17:32:57.026325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:05.056 [2024-12-06 17:32:57.027342] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:05.056 [2024-12-06 17:32:57.028334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:05.056 [2024-12-06 17:32:57.029336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:05.056 [2024-12-06 17:32:57.030334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:05.056 [2024-12-06 17:32:57.034643] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:05.056 [2024-12-06 17:32:57.035355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:05.056 [2024-12-06 17:32:57.036364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:05.056 [2024-12-06 17:32:57.037379] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:05.056 [2024-12-06 17:32:57.037386] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1cc2ce1000 00:17:05.056 [2024-12-06 17:32:57.038299] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:05.056 [2024-12-06 17:32:57.045755] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:05.056 [2024-12-06 17:32:57.045771] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:05.056 [2024-12-06 17:32:57.051460] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:05.056 [2024-12-06 17:32:57.051492] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:05.056 [2024-12-06 17:32:57.051561] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:05.056 [2024-12-06 17:32:57.051574] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:05.056 [2024-12-06 17:32:57.051578] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:05.056 [2024-12-06 17:32:57.052456] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:05.056 [2024-12-06 17:32:57.052463] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:05.056 [2024-12-06 17:32:57.052468] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:05.057 [2024-12-06 17:32:57.053459] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:05.057 [2024-12-06 17:32:57.053465] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:05.057 [2024-12-06 17:32:57.053471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:05.057 [2024-12-06 17:32:57.054466] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:05.057 [2024-12-06 17:32:57.054473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:05.057 [2024-12-06 17:32:57.055477] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
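The register reads in this init sequence follow the standard NVMe controller-enable handshake: offset 0x0 is CAP, 0x8 is VS, 0x14 is CC, and 0x1c is CSTS, so the driver is reading capabilities, checking the spec version, then watching CC.EN/CSTS.RDY. The VS value 0x10300 read above decodes to the NVMe 1.3 reported later in the identify output; a quick check:

  vs=0x10300   # VS register: MJR in bits 31:16, MNR in 15:8, TER in 7:0
  printf 'NVMe %d.%d\n' $(( (vs >> 16) & 0xffff )) $(( (vs >> 8) & 0xff ))   # -> NVMe 1.3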
00:17:05.057 [2024-12-06 17:32:57.055483] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:05.057 [2024-12-06 17:32:57.055486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:05.057 [2024-12-06 17:32:57.055491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:05.057 [2024-12-06 17:32:57.055597] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:05.057 [2024-12-06 17:32:57.055601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:05.057 [2024-12-06 17:32:57.055604] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:05.057 [2024-12-06 17:32:57.057643] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:05.057 [2024-12-06 17:32:57.058492] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:05.057 [2024-12-06 17:32:57.059500] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:05.057 [2024-12-06 17:32:57.060495] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:05.057 [2024-12-06 17:32:57.060559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:05.057 [2024-12-06 17:32:57.061511] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:05.057 [2024-12-06 17:32:57.061519] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:05.057 [2024-12-06 17:32:57.061523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061538] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:05.057 [2024-12-06 17:32:57.061543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061558] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:05.057 [2024-12-06 17:32:57.061562] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:05.057 [2024-12-06 17:32:57.061565] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:05.057 [2024-12-06 17:32:57.061576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:17:05.057 [2024-12-06 17:32:57.061610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:05.057 [2024-12-06 17:32:57.061618] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:05.057 [2024-12-06 17:32:57.061624] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:05.057 [2024-12-06 17:32:57.061627] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:05.057 [2024-12-06 17:32:57.061630] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:05.057 [2024-12-06 17:32:57.061634] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:05.057 [2024-12-06 17:32:57.061642] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:05.057 [2024-12-06 17:32:57.061645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:05.057 [2024-12-06 17:32:57.061674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:05.057 [2024-12-06 17:32:57.061682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.057 [2024-12-06 17:32:57.061689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.057 [2024-12-06 17:32:57.061694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.057 [2024-12-06 17:32:57.061700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:05.057 [2024-12-06 17:32:57.061704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:05.057 [2024-12-06 17:32:57.061723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:05.057 [2024-12-06 17:32:57.061727] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:05.057 
[2024-12-06 17:32:57.061731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:05.057 [2024-12-06 17:32:57.061756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:05.057 [2024-12-06 17:32:57.061800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061811] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:05.057 [2024-12-06 17:32:57.061814] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:05.057 [2024-12-06 17:32:57.061817] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:05.057 [2024-12-06 17:32:57.061821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:05.057 [2024-12-06 17:32:57.061836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:05.057 [2024-12-06 17:32:57.061843] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:05.057 [2024-12-06 17:32:57.061851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061862] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:05.057 [2024-12-06 17:32:57.061865] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:05.057 [2024-12-06 17:32:57.061868] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:05.057 [2024-12-06 17:32:57.061872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:05.057 [2024-12-06 17:32:57.061892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:05.057 [2024-12-06 17:32:57.061901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061912] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:05.057 [2024-12-06 17:32:57.061917] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:05.057 [2024-12-06 17:32:57.061919] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:05.057 [2024-12-06 17:32:57.061924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:05.057 [2024-12-06 17:32:57.061934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:05.057 [2024-12-06 17:32:57.061939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:05.057 [2024-12-06 17:32:57.061960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:05.058 [2024-12-06 17:32:57.061964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:05.058 [2024-12-06 17:32:57.061968] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:05.058 [2024-12-06 17:32:57.061971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:05.058 [2024-12-06 17:32:57.061975] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:05.058 [2024-12-06 17:32:57.061989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:05.058 [2024-12-06 17:32:57.061997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:05.058 [2024-12-06 17:32:57.062005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:05.058 [2024-12-06 17:32:57.062015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:05.058 [2024-12-06 17:32:57.062023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:05.058 [2024-12-06 17:32:57.062032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:05.058 [2024-12-06 17:32:57.062040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:05.058 [2024-12-06 17:32:57.062048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:05.058 [2024-12-06 17:32:57.062058] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:05.058 [2024-12-06 17:32:57.062061] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:05.058 [2024-12-06 17:32:57.062063] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:05.058 [2024-12-06 17:32:57.062066] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:05.058 [2024-12-06 17:32:57.062068] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:05.058 [2024-12-06 17:32:57.062073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:05.058 [2024-12-06 17:32:57.062079] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:05.058 [2024-12-06 17:32:57.062082] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:05.058 [2024-12-06 17:32:57.062085] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:05.058 [2024-12-06 17:32:57.062089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:05.058 [2024-12-06 17:32:57.062094] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:05.058 [2024-12-06 17:32:57.062097] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:05.058 [2024-12-06 17:32:57.062100] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:05.058 [2024-12-06 17:32:57.062104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:05.058 [2024-12-06 17:32:57.062110] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:05.058 [2024-12-06 17:32:57.062113] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:05.058 [2024-12-06 17:32:57.062115] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:05.058 [2024-12-06 17:32:57.062120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:05.058 [2024-12-06 17:32:57.062125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:05.058 [2024-12-06 17:32:57.062133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:17:05.058 [2024-12-06 17:32:57.062141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:05.058 [2024-12-06 17:32:57.062146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:05.058 ===================================================== 00:17:05.058 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:05.058 ===================================================== 00:17:05.058 Controller Capabilities/Features 00:17:05.058 ================================ 00:17:05.058 Vendor ID: 4e58 00:17:05.058 Subsystem Vendor ID: 4e58 00:17:05.058 Serial Number: SPDK1 00:17:05.058 Model Number: SPDK bdev Controller 00:17:05.058 Firmware Version: 25.01 00:17:05.058 Recommended Arb Burst: 6 00:17:05.058 IEEE OUI Identifier: 8d 6b 50 00:17:05.058 Multi-path I/O 00:17:05.058 May have multiple subsystem ports: Yes 00:17:05.058 May have multiple controllers: Yes 00:17:05.058 Associated with SR-IOV VF: No 00:17:05.058 Max Data Transfer Size: 131072 00:17:05.058 Max Number of Namespaces: 32 00:17:05.058 Max Number of I/O Queues: 127 00:17:05.058 NVMe Specification Version (VS): 1.3 00:17:05.058 NVMe Specification Version (Identify): 1.3 00:17:05.058 Maximum Queue Entries: 256 00:17:05.058 Contiguous Queues Required: Yes 00:17:05.058 Arbitration Mechanisms Supported 00:17:05.058 Weighted Round Robin: Not Supported 00:17:05.058 Vendor Specific: Not Supported 00:17:05.058 Reset Timeout: 15000 ms 00:17:05.058 Doorbell Stride: 4 bytes 00:17:05.058 NVM Subsystem Reset: Not Supported 00:17:05.058 Command Sets Supported 00:17:05.058 NVM Command Set: Supported 00:17:05.058 Boot Partition: Not Supported 00:17:05.058 Memory Page Size Minimum: 4096 bytes 00:17:05.058 Memory Page Size Maximum: 4096 bytes 00:17:05.058 Persistent Memory Region: Not Supported 00:17:05.058 Optional Asynchronous Events Supported 00:17:05.058 Namespace Attribute Notices: Supported 00:17:05.058 Firmware Activation Notices: Not Supported 00:17:05.058 ANA Change Notices: Not Supported 00:17:05.058 PLE Aggregate Log Change Notices: Not Supported 00:17:05.058 LBA Status Info Alert Notices: Not Supported 00:17:05.058 EGE Aggregate Log Change Notices: Not Supported 00:17:05.058 Normal NVM Subsystem Shutdown event: Not Supported 00:17:05.058 Zone Descriptor Change Notices: Not Supported 00:17:05.058 Discovery Log Change Notices: Not Supported 00:17:05.058 Controller Attributes 00:17:05.058 128-bit Host Identifier: Supported 00:17:05.058 Non-Operational Permissive Mode: Not Supported 00:17:05.058 NVM Sets: Not Supported 00:17:05.058 Read Recovery Levels: Not Supported 00:17:05.058 Endurance Groups: Not Supported 00:17:05.058 Predictable Latency Mode: Not Supported 00:17:05.058 Traffic Based Keep ALive: Not Supported 00:17:05.058 Namespace Granularity: Not Supported 00:17:05.058 SQ Associations: Not Supported 00:17:05.058 UUID List: Not Supported 00:17:05.058 Multi-Domain Subsystem: Not Supported 00:17:05.058 Fixed Capacity Management: Not Supported 00:17:05.058 Variable Capacity Management: Not Supported 00:17:05.058 Delete Endurance Group: Not Supported 00:17:05.058 Delete NVM Set: Not Supported 00:17:05.058 Extended LBA Formats Supported: Not Supported 00:17:05.058 Flexible Data Placement Supported: Not Supported 00:17:05.058 00:17:05.058 Controller Memory Buffer Support 00:17:05.058 ================================ 00:17:05.058 
Supported: No 00:17:05.058 00:17:05.058 Persistent Memory Region Support 00:17:05.058 ================================ 00:17:05.058 Supported: No 00:17:05.058 00:17:05.058 Admin Command Set Attributes 00:17:05.058 ============================ 00:17:05.058 Security Send/Receive: Not Supported 00:17:05.058 Format NVM: Not Supported 00:17:05.058 Firmware Activate/Download: Not Supported 00:17:05.058 Namespace Management: Not Supported 00:17:05.058 Device Self-Test: Not Supported 00:17:05.058 Directives: Not Supported 00:17:05.058 NVMe-MI: Not Supported 00:17:05.058 Virtualization Management: Not Supported 00:17:05.058 Doorbell Buffer Config: Not Supported 00:17:05.058 Get LBA Status Capability: Not Supported 00:17:05.058 Command & Feature Lockdown Capability: Not Supported 00:17:05.058 Abort Command Limit: 4 00:17:05.058 Async Event Request Limit: 4 00:17:05.058 Number of Firmware Slots: N/A 00:17:05.058 Firmware Slot 1 Read-Only: N/A 00:17:05.058 Firmware Activation Without Reset: N/A 00:17:05.058 Multiple Update Detection Support: N/A 00:17:05.058 Firmware Update Granularity: No Information Provided 00:17:05.058 Per-Namespace SMART Log: No 00:17:05.058 Asymmetric Namespace Access Log Page: Not Supported 00:17:05.058 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:05.058 Command Effects Log Page: Supported 00:17:05.058 Get Log Page Extended Data: Supported 00:17:05.058 Telemetry Log Pages: Not Supported 00:17:05.058 Persistent Event Log Pages: Not Supported 00:17:05.058 Supported Log Pages Log Page: May Support 00:17:05.058 Commands Supported & Effects Log Page: Not Supported 00:17:05.058 Feature Identifiers & Effects Log Page:May Support 00:17:05.058 NVMe-MI Commands & Effects Log Page: May Support 00:17:05.058 Data Area 4 for Telemetry Log: Not Supported 00:17:05.058 Error Log Page Entries Supported: 128 00:17:05.058 Keep Alive: Supported 00:17:05.058 Keep Alive Granularity: 10000 ms 00:17:05.058 00:17:05.058 NVM Command Set Attributes 00:17:05.058 ========================== 00:17:05.058 Submission Queue Entry Size 00:17:05.058 Max: 64 00:17:05.058 Min: 64 00:17:05.058 Completion Queue Entry Size 00:17:05.058 Max: 16 00:17:05.058 Min: 16 00:17:05.058 Number of Namespaces: 32 00:17:05.058 Compare Command: Supported 00:17:05.059 Write Uncorrectable Command: Not Supported 00:17:05.059 Dataset Management Command: Supported 00:17:05.059 Write Zeroes Command: Supported 00:17:05.059 Set Features Save Field: Not Supported 00:17:05.059 Reservations: Not Supported 00:17:05.059 Timestamp: Not Supported 00:17:05.059 Copy: Supported 00:17:05.059 Volatile Write Cache: Present 00:17:05.059 Atomic Write Unit (Normal): 1 00:17:05.059 Atomic Write Unit (PFail): 1 00:17:05.059 Atomic Compare & Write Unit: 1 00:17:05.059 Fused Compare & Write: Supported 00:17:05.059 Scatter-Gather List 00:17:05.059 SGL Command Set: Supported (Dword aligned) 00:17:05.059 SGL Keyed: Not Supported 00:17:05.059 SGL Bit Bucket Descriptor: Not Supported 00:17:05.059 SGL Metadata Pointer: Not Supported 00:17:05.059 Oversized SGL: Not Supported 00:17:05.059 SGL Metadata Address: Not Supported 00:17:05.059 SGL Offset: Not Supported 00:17:05.059 Transport SGL Data Block: Not Supported 00:17:05.059 Replay Protected Memory Block: Not Supported 00:17:05.059 00:17:05.059 Firmware Slot Information 00:17:05.059 ========================= 00:17:05.059 Active slot: 1 00:17:05.059 Slot 1 Firmware Revision: 25.01 00:17:05.059 00:17:05.059 00:17:05.059 Commands Supported and Effects 00:17:05.059 ============================== 00:17:05.059 Admin 
Commands 00:17:05.059 -------------- 00:17:05.059 Get Log Page (02h): Supported 00:17:05.059 Identify (06h): Supported 00:17:05.059 Abort (08h): Supported 00:17:05.059 Set Features (09h): Supported 00:17:05.059 Get Features (0Ah): Supported 00:17:05.059 Asynchronous Event Request (0Ch): Supported 00:17:05.059 Keep Alive (18h): Supported 00:17:05.059 I/O Commands 00:17:05.059 ------------ 00:17:05.059 Flush (00h): Supported LBA-Change 00:17:05.059 Write (01h): Supported LBA-Change 00:17:05.059 Read (02h): Supported 00:17:05.059 Compare (05h): Supported 00:17:05.059 Write Zeroes (08h): Supported LBA-Change 00:17:05.059 Dataset Management (09h): Supported LBA-Change 00:17:05.059 Copy (19h): Supported LBA-Change 00:17:05.059 00:17:05.059 Error Log 00:17:05.059 ========= 00:17:05.059 00:17:05.059 Arbitration 00:17:05.059 =========== 00:17:05.059 Arbitration Burst: 1 00:17:05.059 00:17:05.059 Power Management 00:17:05.059 ================ 00:17:05.059 Number of Power States: 1 00:17:05.059 Current Power State: Power State #0 00:17:05.059 Power State #0: 00:17:05.059 Max Power: 0.00 W 00:17:05.059 Non-Operational State: Operational 00:17:05.059 Entry Latency: Not Reported 00:17:05.059 Exit Latency: Not Reported 00:17:05.059 Relative Read Throughput: 0 00:17:05.059 Relative Read Latency: 0 00:17:05.059 Relative Write Throughput: 0 00:17:05.059 Relative Write Latency: 0 00:17:05.059 Idle Power: Not Reported 00:17:05.059 Active Power: Not Reported 00:17:05.059 Non-Operational Permissive Mode: Not Supported 00:17:05.059 00:17:05.059 Health Information 00:17:05.059 ================== 00:17:05.059 Critical Warnings: 00:17:05.059 Available Spare Space: OK 00:17:05.059 Temperature: OK 00:17:05.059 Device Reliability: OK 00:17:05.059 Read Only: No 00:17:05.059 Volatile Memory Backup: OK 00:17:05.059 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:05.059 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:05.059 Available Spare: 0% 00:17:05.059 Available Spare Threshold: 0% 00:17:05.059 Life Percentage Used: 0% 00:17:05.059 Data Units Read: 0 00:17:05.059 Data Units Written: 0 00:17:05.059 Host Read Commands: 0 00:17:05.059 Host Write Commands: 0 00:17:05.059 Controller Busy Time: 0 minutes 00:17:05.059 Power Cycles: 0 00:17:05.059 Power On Hours: 0 hours 00:17:05.059 Unsafe Shutdowns: 0 00:17:05.059 Unrecoverable Media Errors: 0 00:17:05.059 Lifetime Error Log Entries: 0 00:17:05.059 Warning Temperature Time: 0 minutes 00:17:05.059 Critical Temperature Time: 0 minutes 00:17:05.059 00:17:05.059 Number of Queues 00:17:05.059 ================ 00:17:05.059 Number of I/O Submission Queues: 127 00:17:05.059 Number of I/O Completion Queues: 127 00:17:05.059 00:17:05.059 Active Namespaces 00:17:05.059 ================= 00:17:05.059 Namespace ID:1 00:17:05.059 Error Recovery Timeout: Unlimited 00:17:05.059 Command Set Identifier: NVM (00h) 00:17:05.059 Deallocate: Supported 00:17:05.059 Deallocated/Unwritten Error: Not Supported 00:17:05.059 Deallocated Read Value: Unknown 00:17:05.059 Deallocate in Write Zeroes: Not Supported 00:17:05.059 Deallocated Guard Field: 0xFFFF 00:17:05.059 Flush: Supported 00:17:05.059 Reservation: Supported 00:17:05.059 Namespace Sharing Capabilities: Multiple Controllers 00:17:05.059 Size (in LBAs): 131072 (0GiB) 00:17:05.059 Capacity (in LBAs): 131072 (0GiB) 00:17:05.059 Utilization (in LBAs): 131072 (0GiB) 00:17:05.059 NGUID: 2C29EF9C28F849E9A254AF3F69722AAB 00:17:05.059 UUID: 2c29ef9c-28f8-49e9-a254-af3f69722aab 00:17:05.059 Thin Provisioning: Not Supported 00:17:05.059 Per-NS Atomic Units: Yes 00:17:05.059 Atomic Boundary Size (Normal): 0 00:17:05.059 Atomic Boundary Size (PFail): 0 00:17:05.059 Atomic Boundary Offset: 0 00:17:05.059 Maximum Single Source Range Length: 65535 00:17:05.059 Maximum Copy Length: 65535 00:17:05.059 Maximum Source Range Count: 1 00:17:05.059 NGUID/EUI64 Never Reused: No 00:17:05.059 Namespace Write Protected: No 00:17:05.059 Number of LBA Formats: 1 00:17:05.059 Current LBA Format: LBA Format #00 00:17:05.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:05.059 00:17:05.059
[2024-12-06 17:32:57.062219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:05.059 [2024-12-06 17:32:57.062229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:05.059 [2024-12-06 17:32:57.062251] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:05.059 [2024-12-06 17:32:57.062258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.059 [2024-12-06 17:32:57.062263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.059 [2024-12-06 17:32:57.062267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.059 [2024-12-06 17:32:57.062272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.059 [2024-12-06 17:32:57.062517] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:05.059 [2024-12-06 17:32:57.062525] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:05.059 [2024-12-06 17:32:57.063524] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:05.059 [2024-12-06 17:32:57.063563] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:05.059 [2024-12-06 17:32:57.063569] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:05.059 [2024-12-06 17:32:57.064531] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:05.059 [2024-12-06 17:32:57.064539] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:05.059 [2024-12-06 17:32:57.064589] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:05.059 [2024-12-06 17:32:57.066645] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:05.059
17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
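Note: the spdk_nvme_perf invocation above (like the spdk_nvme_identify and example-app runs elsewhere in this log) reaches the vfio-user controller through SPDK's ordinary host stack; the -r argument is a standard transport ID string with trtype:VFIOUSER. As a minimal illustrative sketch, not part of the test output (the app name is a placeholder and error handling is reduced to early returns), the attach path those tools follow looks roughly like this in C:

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "vfio_user_attach"; /* placeholder app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string passed via -r above. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 "
            "subnqn:nqn.2019-07.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Connect drives the init state machine seen in the debug entries:
         * CC.EN=1, poll CSTS.RDY, IDENTIFY, SET/GET FEATURES, ... */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* Detach triggers the CC.SHN shutdown sequence also visible above. */
        spdk_nvme_detach(ctrlr);
        return 0;
    }

The target-side "enabling controller" / "disabling controller" notices that bracket each tool run below are exactly this attach/detach cycle as seen from the vfio-user endpoint.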
00:17:05.320 [2024-12-06 17:32:57.258419] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:10.609 Initializing NVMe Controllers 00:17:10.609 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:10.609 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:10.609 Initialization complete. Launching workers. 00:17:10.609 ======================================================== 00:17:10.609 Latency(us) 00:17:10.609 Device Information : IOPS MiB/s Average min max 00:17:10.609 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40008.88 156.28 3199.16 866.82 6780.97 00:17:10.609 ======================================================== 00:17:10.609 Total : 40008.88 156.28 3199.16 866.82 6780.97 00:17:10.609 00:17:10.609 [2024-12-06 17:33:02.275577] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:10.609 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:10.609 [2024-12-06 17:33:02.470453] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:16.032 Initializing NVMe Controllers 00:17:16.032 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:16.032 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:16.032 Initialization complete. Launching workers. 
00:17:16.032 ======================================================== 00:17:16.032 Latency(us) 00:17:16.032 Device Information : IOPS MiB/s Average min max 00:17:16.032 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16002.79 62.51 8004.18 5400.85 14842.97 00:17:16.032 ======================================================== 00:17:16.032 Total : 16002.79 62.51 8004.18 5400.85 14842.97 00:17:16.032 00:17:16.032 [2024-12-06 17:33:07.515518] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:16.032 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:16.032 [2024-12-06 17:33:07.716376] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:21.316 [2024-12-06 17:33:12.802964] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:21.316 Initializing NVMe Controllers 00:17:21.316 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:21.316 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:21.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:21.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:21.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:21.316 Initialization complete. Launching workers. 00:17:21.316 Starting thread on core 2 00:17:21.316 Starting thread on core 3 00:17:21.316 Starting thread on core 1 00:17:21.317 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:21.317 [2024-12-06 17:33:13.051963] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:24.615 [2024-12-06 17:33:16.108020] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:24.615 Initializing NVMe Controllers 00:17:24.615 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:24.615 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:24.615 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:24.615 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:24.615 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:24.615 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:24.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:24.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:24.615 Initialization complete. Launching workers. 
00:17:24.615 Starting thread on core 1 with urgent priority queue 00:17:24.615 Starting thread on core 2 with urgent priority queue 00:17:24.615 Starting thread on core 3 with urgent priority queue 00:17:24.615 Starting thread on core 0 with urgent priority queue 00:17:24.615 SPDK bdev Controller (SPDK1 ) core 0: 15221.00 IO/s 6.57 secs/100000 ios 00:17:24.615 SPDK bdev Controller (SPDK1 ) core 1: 12367.33 IO/s 8.09 secs/100000 ios 00:17:24.615 SPDK bdev Controller (SPDK1 ) core 2: 12250.00 IO/s 8.16 secs/100000 ios 00:17:24.615 SPDK bdev Controller (SPDK1 ) core 3: 12175.33 IO/s 8.21 secs/100000 ios 00:17:24.615 ======================================================== 00:17:24.615 00:17:24.615 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:24.615 [2024-12-06 17:33:16.345020] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:24.615 Initializing NVMe Controllers 00:17:24.615 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:24.615 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:24.615 Namespace ID: 1 size: 0GB 00:17:24.615 Initialization complete. 00:17:24.615 INFO: using host memory buffer for IO 00:17:24.615 Hello world! 00:17:24.615 [2024-12-06 17:33:16.379227] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:24.615 17:33:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:24.615 [2024-12-06 17:33:16.622017] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:25.998 Initializing NVMe Controllers 00:17:25.998 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.998 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.998 Initialization complete. Launching workers. 
00:17:25.998 submit (in ns) avg, min, max = 6012.1, 2837.5, 3998060.0 00:17:25.998 complete (in ns) avg, min, max = 18259.6, 1623.3, 4051574.2 00:17:25.998 00:17:25.998 Submit histogram 00:17:25.998 ================ 00:17:25.998 Range in us Cumulative Count 00:17:25.998 2.827 - 2.840: 0.0101% ( 2) 00:17:25.998 2.840 - 2.853: 0.7732% ( 151) 00:17:25.998 2.853 - 2.867: 2.1782% ( 278) 00:17:25.998 2.867 - 2.880: 5.6249% ( 682) 00:17:25.998 2.880 - 2.893: 10.5524% ( 975) 00:17:25.998 2.893 - 2.907: 16.3997% ( 1157) 00:17:25.998 2.907 - 2.920: 22.1863% ( 1145) 00:17:25.998 2.920 - 2.933: 28.2559% ( 1201) 00:17:25.998 2.933 - 2.947: 34.4772% ( 1231) 00:17:25.998 2.947 - 2.960: 39.3137% ( 957) 00:17:25.998 2.960 - 2.973: 44.3170% ( 990) 00:17:25.998 2.973 - 2.987: 49.7094% ( 1067) 00:17:25.998 2.987 - 3.000: 57.3659% ( 1515) 00:17:25.998 3.000 - 3.013: 66.4679% ( 1801) 00:17:25.998 3.013 - 3.027: 76.2117% ( 1928) 00:17:25.998 3.027 - 3.040: 84.0299% ( 1547) 00:17:25.998 3.040 - 3.053: 90.2107% ( 1223) 00:17:25.998 3.053 - 3.067: 94.1729% ( 784) 00:17:25.998 3.067 - 3.080: 96.6796% ( 496) 00:17:25.998 3.080 - 3.093: 98.1806% ( 297) 00:17:25.998 3.093 - 3.107: 99.0095% ( 164) 00:17:25.998 3.107 - 3.120: 99.3632% ( 70) 00:17:25.998 3.120 - 3.133: 99.4845% ( 24) 00:17:25.998 3.133 - 3.147: 99.5350% ( 10) 00:17:25.998 3.147 - 3.160: 99.5704% ( 7) 00:17:25.998 3.160 - 3.173: 99.5755% ( 1) 00:17:25.998 3.173 - 3.187: 99.5805% ( 1) 00:17:25.998 3.187 - 3.200: 99.5856% ( 1) 00:17:25.998 3.400 - 3.413: 99.5906% ( 1) 00:17:25.998 3.413 - 3.440: 99.5957% ( 1) 00:17:25.998 3.573 - 3.600: 99.6007% ( 1) 00:17:25.998 3.600 - 3.627: 99.6058% ( 1) 00:17:25.998 3.840 - 3.867: 99.6109% ( 1) 00:17:25.998 4.027 - 4.053: 99.6159% ( 1) 00:17:25.998 4.187 - 4.213: 99.6210% ( 1) 00:17:25.998 4.240 - 4.267: 99.6260% ( 1) 00:17:25.998 4.267 - 4.293: 99.6311% ( 1) 00:17:25.998 4.347 - 4.373: 99.6412% ( 2) 00:17:25.998 4.480 - 4.507: 99.6462% ( 1) 00:17:25.998 4.533 - 4.560: 99.6513% ( 1) 00:17:25.998 4.587 - 4.613: 99.6563% ( 1) 00:17:25.998 4.613 - 4.640: 99.6664% ( 2) 00:17:25.998 4.667 - 4.693: 99.6715% ( 1) 00:17:25.998 4.693 - 4.720: 99.6816% ( 2) 00:17:25.998 4.747 - 4.773: 99.6867% ( 1) 00:17:25.998 4.773 - 4.800: 99.7069% ( 4) 00:17:25.998 4.800 - 4.827: 99.7170% ( 2) 00:17:25.998 4.827 - 4.853: 99.7271% ( 2) 00:17:25.998 4.853 - 4.880: 99.7423% ( 3) 00:17:25.998 4.880 - 4.907: 99.7473% ( 1) 00:17:25.998 4.907 - 4.933: 99.7574% ( 2) 00:17:25.998 4.960 - 4.987: 99.7625% ( 1) 00:17:25.998 4.987 - 5.013: 99.7726% ( 2) 00:17:25.998 5.013 - 5.040: 99.7776% ( 1) 00:17:25.998 5.040 - 5.067: 99.7877% ( 2) 00:17:25.998 5.067 - 5.093: 99.7928% ( 1) 00:17:25.998 5.120 - 5.147: 99.7978% ( 1) 00:17:25.998 5.147 - 5.173: 99.8080% ( 2) 00:17:25.998 5.280 - 5.307: 99.8130% ( 1) 00:17:25.998 5.333 - 5.360: 99.8181% ( 1) 00:17:25.998 5.493 - 5.520: 99.8231% ( 1) 00:17:25.998 5.547 - 5.573: 99.8282% ( 1) 00:17:25.998 5.573 - 5.600: 99.8332% ( 1) 00:17:25.998 5.627 - 5.653: 99.8383% ( 1) 00:17:25.998 5.707 - 5.733: 99.8433% ( 1) 00:17:25.998 5.867 - 5.893: 99.8484% ( 1) 00:17:25.998 5.893 - 5.920: 99.8534% ( 1) 00:17:25.998 6.053 - 6.080: 99.8635% ( 2) 00:17:25.998 6.160 - 6.187: 99.8686% ( 1) 00:17:25.998 6.187 - 6.213: 99.8737% ( 1) 00:17:25.998 6.293 - 6.320: 99.8787% ( 1) 00:17:25.998 6.427 - 6.453: 99.8838% ( 1) 00:17:25.998 6.480 - 6.507: 99.8888% ( 1) 00:17:25.998 6.560 - 6.587: 99.8939% ( 1) 00:17:25.998 6.720 - 6.747: 99.8989% ( 1) 00:17:25.998 6.747 - 6.773: 99.9040% ( 1) 00:17:25.998 6.987 - 7.040: 99.9090% ( 1) 00:17:25.998 
7.253 - 7.307: 99.9141% ( 1) 00:17:25.998 8.480 - 8.533: 99.9191% ( 1) 00:17:25.998 8.907 - 8.960: 99.9242% ( 1) 00:17:25.998 3986.773 - 4014.080: 100.0000% ( 15) 00:17:25.998 00:17:25.998 Complete histogram 00:17:25.998 ================== 00:17:25.998 Range in us Cumulative Count 00:17:25.998 1.620 - 1.627: 0.0051% ( 1) 00:17:25.998 1.633 - 1.640: 0.1263% ( 24) 00:17:25.998 1.640 - 1.647: 0.7935% ( 132) 00:17:25.998 1.647 - 1.653: 0.8389% ( 9) 00:17:25.998 1.653 - 1.660: 0.9703% ( 26) 00:17:25.998 1.660 - 1.667: 1.0360% ( 13) 00:17:25.998 1.667 - 1.673: 1.0765% ( 8) 00:17:25.998 1.673 - 1.680: 10.5322% ( 1871) 00:17:25.998 1.680 - 1.687: 46.5002% ( 7117) 00:17:25.998 1.687 - 1.693: 48.6734% ( 430) 00:17:25.998 1.693 - 1.700: 62.7887% ( 2793) 00:17:25.998 1.700 - 1.707: 76.3936% ( 2692) 00:17:25.998 1.707 - 1.720: 83.3578% ( 1378) 00:17:25.998 1.720 - 1.733: 84.4140% ( 209) 00:17:25.998 1.733 - 1.747: 87.8203% ( 674) 00:17:25.998 1.747 - 1.760: 92.7073% ( 967) 00:17:25.998 1.760 - 1.773: 96.9071% ( 831) 00:17:25.998 1.773 - 1.787: 98.7618% ( 367) 00:17:25.998 1.787 - 1.800: 99.3026% ( 107) 00:17:25.998 1.800 - 1.813: 99.3632% ( 12) 00:17:25.998 1.813 - 1.827: 99.3885% ( 5) 00:17:25.998 1.840 - 1.853: 99.3935% ( 1) 00:17:25.998 1.867 - 1.880: 99.3986% ( 1) 00:17:25.998 3.333 - 3.347: 99.4036% ( 1) 00:17:25.998 3.347 - 3.360: 99.4087% ( 1) 00:17:25.998 3.413 - 3.440: 99.4239% ( 3) 00:17:25.998 3.440 - 3.467: 99.4289% ( 1) 00:17:25.998 3.467 - 3.493: 99.4340% ( 1) 00:17:25.998 3.573 - 3.600: 99.4390% ( 1) 00:17:25.998 3.600 - 3.627: 99.4441% ( 1) 00:17:25.999 3.627 - 3.653: 99.4491% ( 1) 00:17:25.999 3.653 - 3.680: 99.4592% ( 1) 00:17:25.999 3.680 - 3.707: 99.4643% ( 1) 00:17:25.999 3.707 - 3.733: 99.4744% ( 2) 00:17:25.999 3.787 - 3.813: 99.4795% ( 1) 00:17:25.999 3.813 - 3.840: 99.4845% ( 1) 00:17:25.999 3.893 - 3.920: 99.4896% ( 1) 00:17:25.999 4.000 - 4.027: 99.4946% ( 1) 00:17:25.999 4.027 - 4.053: 99.4997% ( 1) 00:17:25.999 4.080 - 4.107: 99.5047% ( 1) 00:17:25.999 4.213 - 4.240: 99.5098% ( 1) 00:17:25.999 4.427 - 4.453: 99.5199% ( 2) 00:17:25.999 4.480 - 4.507: 99.5249% ( 1) 00:17:25.999 4.587 - 4.613: 99.5300% ( 1) 00:17:25.999 4.693 - 4.720: 99.5350% ( 1) 00:17:25.999 4.720 - 4.747: 99.5401% ( 1) 00:17:25.999 4.800 - 4.827: 99.5452% ( 1) 00:17:25.999 4.907 - 4.933: 99.5502% ( 1) 00:17:25.999 5.067 - 5.093: 99.5553% ( 1) 00:17:25.999 5.120 - 5.147: 99.5603% ( 1) 00:17:25.999 5.573 - 5.600: 99.5654% ( 1) 00:17:25.999 5.840 - 5.867: 99.5704% ( 1) 00:17:25.999 6.373 - 6.400: 99.5755% ( 1) 00:17:25.999 9.173 - 9.227: 99.5805% ( 1) 00:17:25.999 9.440 - 9.493: 99.5856% ( 1) 00:17:25.999 3986.773 - 4014.080: 99.9949% ( 81) 00:17:25.999 4041.387 - 4068.693: 100.0000% ( 1) 00:17:25.999 00:17:25.999
[2024-12-06 17:33:17.642913] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:25.998
17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:25.999 [ 00:17:25.999 { 00:17:25.999 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.999 "subtype": "Discovery", 00:17:25.999 "listen_addresses": [], 00:17:25.999 "allow_any_host": true, 00:17:25.999 "hosts": [] 00:17:25.999 }, 00:17:25.999 { 00:17:25.999 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:25.999 "subtype": "NVMe", 00:17:25.999 "listen_addresses": [ 00:17:25.999 { 00:17:25.999 "trtype": "VFIOUSER", 00:17:25.999 "adrfam": "IPv4", 00:17:25.999 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:25.999 "trsvcid": "0" 00:17:25.999 } 00:17:25.999 ], 00:17:25.999 "allow_any_host": true, 00:17:25.999 "hosts": [], 00:17:25.999 "serial_number": "SPDK1", 00:17:25.999 "model_number": "SPDK bdev Controller", 00:17:25.999 "max_namespaces": 32, 00:17:25.999 "min_cntlid": 1, 00:17:25.999 "max_cntlid": 65519, 00:17:25.999 "namespaces": [ 00:17:25.999 { 00:17:25.999 "nsid": 1, 00:17:25.999 "bdev_name": "Malloc1", 00:17:25.999 "name": "Malloc1", 00:17:25.999 "nguid": "2C29EF9C28F849E9A254AF3F69722AAB", 00:17:25.999 "uuid": "2c29ef9c-28f8-49e9-a254-af3f69722aab" 00:17:25.999 } 00:17:25.999 ] 00:17:25.999 }, 00:17:25.999 { 00:17:25.999 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:25.999 "subtype": "NVMe", 00:17:25.999 "listen_addresses": [ 00:17:25.999 { 00:17:25.999 "trtype": "VFIOUSER", 00:17:25.999 "adrfam": "IPv4", 00:17:25.999 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:25.999 "trsvcid": "0" 00:17:25.999 } 00:17:25.999 ], 00:17:25.999 "allow_any_host": true, 00:17:25.999 "hosts": [], 00:17:25.999 "serial_number": "SPDK2", 00:17:25.999 "model_number": "SPDK bdev Controller", 00:17:25.999 "max_namespaces": 32, 00:17:25.999 "min_cntlid": 1, 00:17:25.999 "max_cntlid": 65519, 00:17:25.999 "namespaces": [ 00:17:25.999 { 00:17:25.999 "nsid": 1, 00:17:25.999 "bdev_name": "Malloc2", 00:17:25.999 "name": "Malloc2", 00:17:25.999 "nguid": "19597A747D824E189616A1ACD3BBD8A5", 00:17:25.999 "uuid": "19597a74-7d82-4e18-9616-a1acd3bbd8a5" 00:17:25.999 } 00:17:25.999 ] 00:17:25.999 } 00:17:25.999 ] 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1654013 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:25.999 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:25.999 [2024-12-06 17:33:18.011006] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:25.999 Malloc3 00:17:25.999 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:26.259 [2024-12-06 17:33:18.215443] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:26.259 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:26.259 Asynchronous Event Request test 00:17:26.259 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:26.259 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:26.259 Registering asynchronous event callbacks... 00:17:26.259 Starting namespace attribute notice tests for all controllers... 00:17:26.260 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:26.260 aer_cb - Changed Namespace 00:17:26.260 Cleaning up... 00:17:26.521 [ 00:17:26.521 { 00:17:26.521 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:26.521 "subtype": "Discovery", 00:17:26.522 "listen_addresses": [], 00:17:26.522 "allow_any_host": true, 00:17:26.522 "hosts": [] 00:17:26.522 }, 00:17:26.522 { 00:17:26.522 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:26.522 "subtype": "NVMe", 00:17:26.522 "listen_addresses": [ 00:17:26.522 { 00:17:26.522 "trtype": "VFIOUSER", 00:17:26.522 "adrfam": "IPv4", 00:17:26.522 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:26.522 "trsvcid": "0" 00:17:26.522 } 00:17:26.522 ], 00:17:26.522 "allow_any_host": true, 00:17:26.522 "hosts": [], 00:17:26.522 "serial_number": "SPDK1", 00:17:26.522 "model_number": "SPDK bdev Controller", 00:17:26.522 "max_namespaces": 32, 00:17:26.522 "min_cntlid": 1, 00:17:26.522 "max_cntlid": 65519, 00:17:26.522 "namespaces": [ 00:17:26.522 { 00:17:26.522 "nsid": 1, 00:17:26.522 "bdev_name": "Malloc1", 00:17:26.522 "name": "Malloc1", 00:17:26.522 "nguid": "2C29EF9C28F849E9A254AF3F69722AAB", 00:17:26.522 "uuid": "2c29ef9c-28f8-49e9-a254-af3f69722aab" 00:17:26.522 }, 00:17:26.522 { 00:17:26.522 "nsid": 2, 00:17:26.522 "bdev_name": "Malloc3", 00:17:26.522 "name": "Malloc3", 00:17:26.522 "nguid": "436A159B9D1C4DF792A83677A8AB6BCC", 00:17:26.522 "uuid": "436a159b-9d1c-4df7-92a8-3677a8ab6bcc" 00:17:26.522 } 00:17:26.522 ] 00:17:26.522 }, 00:17:26.522 { 00:17:26.522 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:26.522 "subtype": "NVMe", 00:17:26.522 "listen_addresses": [ 00:17:26.522 { 00:17:26.522 "trtype": "VFIOUSER", 00:17:26.522 "adrfam": "IPv4", 00:17:26.522 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:26.522 "trsvcid": "0" 00:17:26.522 } 00:17:26.522 ], 00:17:26.522 "allow_any_host": true, 00:17:26.522 "hosts": [], 00:17:26.522 "serial_number": "SPDK2", 00:17:26.522 "model_number": "SPDK bdev 
Controller", 00:17:26.522 "max_namespaces": 32, 00:17:26.522 "min_cntlid": 1, 00:17:26.522 "max_cntlid": 65519, 00:17:26.522 "namespaces": [ 00:17:26.522 { 00:17:26.522 "nsid": 1, 00:17:26.522 "bdev_name": "Malloc2", 00:17:26.522 "name": "Malloc2", 00:17:26.522 "nguid": "19597A747D824E189616A1ACD3BBD8A5", 00:17:26.522 "uuid": "19597a74-7d82-4e18-9616-a1acd3bbd8a5" 00:17:26.522 } 00:17:26.522 ] 00:17:26.522 } 00:17:26.522 ] 00:17:26.522 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1654013 00:17:26.522 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:26.522 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:26.522 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:26.522 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:26.522 [2024-12-06 17:33:18.437850] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:17:26.522 [2024-12-06 17:33:18.437889] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654023 ] 00:17:26.522 [2024-12-06 17:33:18.477891] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:26.522 [2024-12-06 17:33:18.480066] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:26.522 [2024-12-06 17:33:18.480086] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb90debc000 00:17:26.522 [2024-12-06 17:33:18.481074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.522 [2024-12-06 17:33:18.482082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.522 [2024-12-06 17:33:18.483087] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.522 [2024-12-06 17:33:18.484092] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:26.522 [2024-12-06 17:33:18.485099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:26.522 [2024-12-06 17:33:18.486107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.522 [2024-12-06 17:33:18.487114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:26.522 [2024-12-06 17:33:18.488124] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:17:26.522 [2024-12-06 17:33:18.489133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:26.522 [2024-12-06 17:33:18.489141] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb90deb1000 00:17:26.522 [2024-12-06 17:33:18.490052] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:26.522 [2024-12-06 17:33:18.502919] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:26.522 [2024-12-06 17:33:18.502936] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:26.522 [2024-12-06 17:33:18.508017] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:26.522 [2024-12-06 17:33:18.508049] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:26.522 [2024-12-06 17:33:18.508112] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:17:26.522 [2024-12-06 17:33:18.508121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:26.522 [2024-12-06 17:33:18.508125] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:17:26.522 [2024-12-06 17:33:18.509019] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:26.522 [2024-12-06 17:33:18.509026] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:26.522 [2024-12-06 17:33:18.509032] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:26.522 [2024-12-06 17:33:18.510023] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:26.522 [2024-12-06 17:33:18.510033] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:26.522 [2024-12-06 17:33:18.510038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:26.522 [2024-12-06 17:33:18.511037] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:26.522 [2024-12-06 17:33:18.511043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:26.522 [2024-12-06 17:33:18.512040] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:26.522 [2024-12-06 17:33:18.512046] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:17:26.522 [2024-12-06 17:33:18.512050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:26.522 [2024-12-06 17:33:18.512055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:26.522 [2024-12-06 17:33:18.512161] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:26.522 [2024-12-06 17:33:18.512164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:26.522 [2024-12-06 17:33:18.512168] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:26.522 [2024-12-06 17:33:18.513045] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:26.522 [2024-12-06 17:33:18.514049] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:26.522 [2024-12-06 17:33:18.515054] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:26.522 [2024-12-06 17:33:18.516057] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:26.522 [2024-12-06 17:33:18.516089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:26.522 [2024-12-06 17:33:18.517070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:26.522 [2024-12-06 17:33:18.517076] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:26.522 [2024-12-06 17:33:18.517080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:26.522 [2024-12-06 17:33:18.517095] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:26.522 [2024-12-06 17:33:18.517100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:26.522 [2024-12-06 17:33:18.517111] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.522 [2024-12-06 17:33:18.517115] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.522 [2024-12-06 17:33:18.517117] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.523 [2024-12-06 17:33:18.517126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.523 [2024-12-06 17:33:18.524644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:26.523 
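Note: the register traffic just logged (the offset 0x14 write of 0x460001, the "enabling controller" notice, then the offset 0x1c read of 0x1) is the generic NVMe controller-enable handshake, not anything vfio-user specific: the host writes CC (Controller Configuration, offset 0x14) with EN=1, then polls CSTS (Controller Status, offset 0x1c) until RDY=1. A hedged sketch of that handshake over a mapped register region (constants taken from this log; the unbounded polling loop and all error handling are simplifications):

    #include <stdint.h>

    #define NVME_REG_CC   0x14 /* Controller Configuration */
    #define NVME_REG_CSTS 0x1c /* Controller Status */

    /* 'regs' stands for the mapped controller registers; with vfio-user
     * they are served over /var/run/vfio-user/domain/vfio-user2/2. */
    static void nvme_enable(volatile uint32_t *regs)
    {
        /* 0x460001 as logged: IOCQES=4 (16-byte CQ entries),
         * IOSQES=6 (64-byte SQ entries), EN=1. */
        regs[NVME_REG_CC / 4] = 0x00460001;

        /* Poll CSTS.RDY (bit 0); a real driver bounds this by CAP.TO. */
        while ((regs[NVME_REG_CSTS / 4] & 0x1) == 0) {
            /* spin until the "offset 0x1c, value 0x1" read succeeds */
        }
    }

The shutdown mirror image appears earlier in this log for controller 1: CC was rewritten to 0x464001 (the same value plus SHN=01b in bits 15:14) and CSTS was polled until it read 0x9, i.e. RDY=1 with SHST=10b (shutdown complete).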
[2024-12-06 17:33:18.524653] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:26.523 [2024-12-06 17:33:18.524658] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:26.523 [2024-12-06 17:33:18.524661] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:26.523 [2024-12-06 17:33:18.524665] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:26.523 [2024-12-06 17:33:18.524668] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:26.523 [2024-12-06 17:33:18.524671] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:26.523 [2024-12-06 17:33:18.524675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.524681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.524688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:26.523 [2024-12-06 17:33:18.532642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:26.523 [2024-12-06 17:33:18.532652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.523 [2024-12-06 17:33:18.532658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.523 [2024-12-06 17:33:18.532664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.523 [2024-12-06 17:33:18.532670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.523 [2024-12-06 17:33:18.532674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.532680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.532687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:26.523 [2024-12-06 17:33:18.540642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:26.523 [2024-12-06 17:33:18.540648] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:26.523 [2024-12-06 17:33:18.540652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:17:26.523 [2024-12-06 17:33:18.540657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.540661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.540668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:26.523 [2024-12-06 17:33:18.548642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:26.523 [2024-12-06 17:33:18.548690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.548696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.548702] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:26.523 [2024-12-06 17:33:18.548705] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:26.523 [2024-12-06 17:33:18.548708] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.523 [2024-12-06 17:33:18.548712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:26.523 [2024-12-06 17:33:18.554758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:26.523 [2024-12-06 17:33:18.554767] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:26.523 [2024-12-06 17:33:18.554777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.554783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.554788] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.523 [2024-12-06 17:33:18.554791] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.523 [2024-12-06 17:33:18.554793] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.523 [2024-12-06 17:33:18.554798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.523 [2024-12-06 17:33:18.564642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:26.523 [2024-12-06 17:33:18.564653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.564658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.564664] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.523 [2024-12-06 17:33:18.564667] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.523 [2024-12-06 17:33:18.564669] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.523 [2024-12-06 17:33:18.564674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.523 [2024-12-06 17:33:18.572642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:26.523 [2024-12-06 17:33:18.572649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.572654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.572660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.572665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.572671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.572675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.572678] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:26.523 [2024-12-06 17:33:18.572682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:26.523 [2024-12-06 17:33:18.572685] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:26.523 [2024-12-06 17:33:18.572699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:26.523 [2024-12-06 17:33:18.580643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:26.523 [2024-12-06 17:33:18.580653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:26.786 [2024-12-06 17:33:18.588643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:26.786 [2024-12-06 17:33:18.588653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:26.786 [2024-12-06 17:33:18.596641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:17:26.786 [2024-12-06 17:33:18.596651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:26.786 [2024-12-06 17:33:18.604644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:26.786 [2024-12-06 17:33:18.604657] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:26.786 [2024-12-06 17:33:18.604661] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:26.786 [2024-12-06 17:33:18.604663] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:26.786 [2024-12-06 17:33:18.604666] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:26.786 [2024-12-06 17:33:18.604668] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:26.786 [2024-12-06 17:33:18.604672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:26.786 [2024-12-06 17:33:18.604678] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:26.786 [2024-12-06 17:33:18.604681] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:26.786 [2024-12-06 17:33:18.604683] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.786 [2024-12-06 17:33:18.604688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:26.786 [2024-12-06 17:33:18.604693] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:26.786 [2024-12-06 17:33:18.604696] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.786 [2024-12-06 17:33:18.604698] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.786 [2024-12-06 17:33:18.604702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.786 [2024-12-06 17:33:18.604709] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:26.786 [2024-12-06 17:33:18.604713] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:26.786 [2024-12-06 17:33:18.604715] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.786 [2024-12-06 17:33:18.604719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:26.786 [2024-12-06 17:33:18.612645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:26.786 [2024-12-06 17:33:18.612657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:26.786 [2024-12-06 17:33:18.612664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:26.786 
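Note: one value above worth decoding is cdw0:7e007e in the GET FEATURES NUMBER OF QUEUES completion. For feature 07h, completion dword 0 carries the allocated I/O submission queue count in bits 15:0 and the completion queue count in bits 31:16, both zero-based, so 0x007e = 126 encodes 127 queues of each kind, which matches the "Max Number of I/O Queues: 127" line in the identify output below. A tiny illustrative decoder:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t cdw0 = 0x7e007e; /* completion dword 0 from the log */

        /* Number of Queues (feature 07h): bits 15:0 = NSQA (submission
         * queues allocated), bits 31:16 = NCQA (completion queues
         * allocated), both 0-based counts. */
        printf("I/O SQs: %u, I/O CQs: %u\n",
               (cdw0 & 0xffffu) + 1u, (cdw0 >> 16) + 1u);
        /* prints: I/O SQs: 127, I/O CQs: 127 */
        return 0;
    }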
[2024-12-06 17:33:18.612670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:26.786 ===================================================== 00:17:26.786 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:26.786 ===================================================== 00:17:26.786 Controller Capabilities/Features 00:17:26.786 ================================ 00:17:26.786 Vendor ID: 4e58 00:17:26.786 Subsystem Vendor ID: 4e58 00:17:26.786 Serial Number: SPDK2 00:17:26.786 Model Number: SPDK bdev Controller 00:17:26.786 Firmware Version: 25.01 00:17:26.786 Recommended Arb Burst: 6 00:17:26.786 IEEE OUI Identifier: 8d 6b 50 00:17:26.786 Multi-path I/O 00:17:26.786 May have multiple subsystem ports: Yes 00:17:26.786 May have multiple controllers: Yes 00:17:26.786 Associated with SR-IOV VF: No 00:17:26.786 Max Data Transfer Size: 131072 00:17:26.786 Max Number of Namespaces: 32 00:17:26.786 Max Number of I/O Queues: 127 00:17:26.786 NVMe Specification Version (VS): 1.3 00:17:26.786 NVMe Specification Version (Identify): 1.3 00:17:26.786 Maximum Queue Entries: 256 00:17:26.786 Contiguous Queues Required: Yes 00:17:26.786 Arbitration Mechanisms Supported 00:17:26.786 Weighted Round Robin: Not Supported 00:17:26.786 Vendor Specific: Not Supported 00:17:26.786 Reset Timeout: 15000 ms 00:17:26.786 Doorbell Stride: 4 bytes 00:17:26.786 NVM Subsystem Reset: Not Supported 00:17:26.786 Command Sets Supported 00:17:26.786 NVM Command Set: Supported 00:17:26.786 Boot Partition: Not Supported 00:17:26.786 Memory Page Size Minimum: 4096 bytes 00:17:26.786 Memory Page Size Maximum: 4096 bytes 00:17:26.786 Persistent Memory Region: Not Supported 00:17:26.786 Optional Asynchronous Events Supported 00:17:26.786 Namespace Attribute Notices: Supported 00:17:26.786 Firmware Activation Notices: Not Supported 00:17:26.786 ANA Change Notices: Not Supported 00:17:26.786 PLE Aggregate Log Change Notices: Not Supported 00:17:26.786 LBA Status Info Alert Notices: Not Supported 00:17:26.786 EGE Aggregate Log Change Notices: Not Supported 00:17:26.786 Normal NVM Subsystem Shutdown event: Not Supported 00:17:26.786 Zone Descriptor Change Notices: Not Supported 00:17:26.786 Discovery Log Change Notices: Not Supported 00:17:26.786 Controller Attributes 00:17:26.786 128-bit Host Identifier: Supported 00:17:26.786 Non-Operational Permissive Mode: Not Supported 00:17:26.786 NVM Sets: Not Supported 00:17:26.786 Read Recovery Levels: Not Supported 00:17:26.786 Endurance Groups: Not Supported 00:17:26.786 Predictable Latency Mode: Not Supported 00:17:26.786 Traffic Based Keep ALive: Not Supported 00:17:26.786 Namespace Granularity: Not Supported 00:17:26.786 SQ Associations: Not Supported 00:17:26.787 UUID List: Not Supported 00:17:26.787 Multi-Domain Subsystem: Not Supported 00:17:26.787 Fixed Capacity Management: Not Supported 00:17:26.787 Variable Capacity Management: Not Supported 00:17:26.787 Delete Endurance Group: Not Supported 00:17:26.787 Delete NVM Set: Not Supported 00:17:26.787 Extended LBA Formats Supported: Not Supported 00:17:26.787 Flexible Data Placement Supported: Not Supported 00:17:26.787 00:17:26.787 Controller Memory Buffer Support 00:17:26.787 ================================ 00:17:26.787 Supported: No 00:17:26.787 00:17:26.787 Persistent Memory Region Support 00:17:26.787 ================================ 00:17:26.787 Supported: No 00:17:26.787 00:17:26.787 Admin Command Set Attributes 
00:17:26.787 ============================ 00:17:26.787 Security Send/Receive: Not Supported 00:17:26.787 Format NVM: Not Supported 00:17:26.787 Firmware Activate/Download: Not Supported 00:17:26.787 Namespace Management: Not Supported 00:17:26.787 Device Self-Test: Not Supported 00:17:26.787 Directives: Not Supported 00:17:26.787 NVMe-MI: Not Supported 00:17:26.787 Virtualization Management: Not Supported 00:17:26.787 Doorbell Buffer Config: Not Supported 00:17:26.787 Get LBA Status Capability: Not Supported 00:17:26.787 Command & Feature Lockdown Capability: Not Supported 00:17:26.787 Abort Command Limit: 4 00:17:26.787 Async Event Request Limit: 4 00:17:26.787 Number of Firmware Slots: N/A 00:17:26.787 Firmware Slot 1 Read-Only: N/A 00:17:26.787 Firmware Activation Without Reset: N/A 00:17:26.787 Multiple Update Detection Support: N/A 00:17:26.787 Firmware Update Granularity: No Information Provided 00:17:26.787 Per-Namespace SMART Log: No 00:17:26.787 Asymmetric Namespace Access Log Page: Not Supported 00:17:26.787 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:26.787 Command Effects Log Page: Supported 00:17:26.787 Get Log Page Extended Data: Supported 00:17:26.787 Telemetry Log Pages: Not Supported 00:17:26.787 Persistent Event Log Pages: Not Supported 00:17:26.787 Supported Log Pages Log Page: May Support 00:17:26.787 Commands Supported & Effects Log Page: Not Supported 00:17:26.787 Feature Identifiers & Effects Log Page:May Support 00:17:26.787 NVMe-MI Commands & Effects Log Page: May Support 00:17:26.787 Data Area 4 for Telemetry Log: Not Supported 00:17:26.787 Error Log Page Entries Supported: 128 00:17:26.787 Keep Alive: Supported 00:17:26.787 Keep Alive Granularity: 10000 ms 00:17:26.787 00:17:26.787 NVM Command Set Attributes 00:17:26.787 ========================== 00:17:26.787 Submission Queue Entry Size 00:17:26.787 Max: 64 00:17:26.787 Min: 64 00:17:26.787 Completion Queue Entry Size 00:17:26.787 Max: 16 00:17:26.787 Min: 16 00:17:26.787 Number of Namespaces: 32 00:17:26.787 Compare Command: Supported 00:17:26.787 Write Uncorrectable Command: Not Supported 00:17:26.787 Dataset Management Command: Supported 00:17:26.787 Write Zeroes Command: Supported 00:17:26.787 Set Features Save Field: Not Supported 00:17:26.787 Reservations: Not Supported 00:17:26.787 Timestamp: Not Supported 00:17:26.787 Copy: Supported 00:17:26.787 Volatile Write Cache: Present 00:17:26.787 Atomic Write Unit (Normal): 1 00:17:26.787 Atomic Write Unit (PFail): 1 00:17:26.787 Atomic Compare & Write Unit: 1 00:17:26.787 Fused Compare & Write: Supported 00:17:26.787 Scatter-Gather List 00:17:26.787 SGL Command Set: Supported (Dword aligned) 00:17:26.787 SGL Keyed: Not Supported 00:17:26.787 SGL Bit Bucket Descriptor: Not Supported 00:17:26.787 SGL Metadata Pointer: Not Supported 00:17:26.787 Oversized SGL: Not Supported 00:17:26.787 SGL Metadata Address: Not Supported 00:17:26.787 SGL Offset: Not Supported 00:17:26.787 Transport SGL Data Block: Not Supported 00:17:26.787 Replay Protected Memory Block: Not Supported 00:17:26.787 00:17:26.787 Firmware Slot Information 00:17:26.787 ========================= 00:17:26.787 Active slot: 1 00:17:26.787 Slot 1 Firmware Revision: 25.01 00:17:26.787 00:17:26.787 00:17:26.787 Commands Supported and Effects 00:17:26.787 ============================== 00:17:26.787 Admin Commands 00:17:26.787 -------------- 00:17:26.787 Get Log Page (02h): Supported 00:17:26.787 Identify (06h): Supported 00:17:26.787 Abort (08h): Supported 00:17:26.787 Set Features (09h): Supported 
00:17:26.787 Get Features (0Ah): Supported 00:17:26.787 Asynchronous Event Request (0Ch): Supported 00:17:26.787 Keep Alive (18h): Supported 00:17:26.787 I/O Commands 00:17:26.787 ------------ 00:17:26.787 Flush (00h): Supported LBA-Change 00:17:26.787 Write (01h): Supported LBA-Change 00:17:26.787 Read (02h): Supported 00:17:26.787 Compare (05h): Supported 00:17:26.787 Write Zeroes (08h): Supported LBA-Change 00:17:26.787 Dataset Management (09h): Supported LBA-Change 00:17:26.787 Copy (19h): Supported LBA-Change 00:17:26.787 00:17:26.787 Error Log 00:17:26.787 ========= 00:17:26.787 00:17:26.787 Arbitration 00:17:26.787 =========== 00:17:26.787 Arbitration Burst: 1 00:17:26.787 00:17:26.787 Power Management 00:17:26.787 ================ 00:17:26.787 Number of Power States: 1 00:17:26.787 Current Power State: Power State #0 00:17:26.787 Power State #0: 00:17:26.787 Max Power: 0.00 W 00:17:26.787 Non-Operational State: Operational 00:17:26.787 Entry Latency: Not Reported 00:17:26.787 Exit Latency: Not Reported 00:17:26.787 Relative Read Throughput: 0 00:17:26.787 Relative Read Latency: 0 00:17:26.787 Relative Write Throughput: 0 00:17:26.787 Relative Write Latency: 0 00:17:26.787 Idle Power: Not Reported 00:17:26.787 Active Power: Not Reported 00:17:26.787 Non-Operational Permissive Mode: Not Supported 00:17:26.787 00:17:26.787 Health Information 00:17:26.787 ================== 00:17:26.787 Critical Warnings: 00:17:26.787 Available Spare Space: OK 00:17:26.787 Temperature: OK 00:17:26.787 Device Reliability: OK 00:17:26.787 Read Only: No 00:17:26.787 Volatile Memory Backup: OK 00:17:26.787 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:26.787 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:26.787 Available Spare: 0% 00:17:26.787 Available Sp[2024-12-06 17:33:18.612745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:26.787 [2024-12-06 17:33:18.620645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:26.787 [2024-12-06 17:33:18.620669] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:17:26.787 [2024-12-06 17:33:18.620676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.787 [2024-12-06 17:33:18.620681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.787 [2024-12-06 17:33:18.620685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.787 [2024-12-06 17:33:18.620690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.787 [2024-12-06 17:33:18.620727] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:26.787 [2024-12-06 17:33:18.620735] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:26.787 [2024-12-06 17:33:18.621730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:26.787 [2024-12-06 17:33:18.621767] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:26.787 [2024-12-06 17:33:18.621772] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:26.787 [2024-12-06 17:33:18.622736] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:26.787 [2024-12-06 17:33:18.622745] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:17:26.787 [2024-12-06 17:33:18.622784] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:26.787 [2024-12-06 17:33:18.623759] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:26.787 are Threshold: 0% 00:17:26.787 Life Percentage Used: 0% 00:17:26.787 Data Units Read: 0 00:17:26.787 Data Units Written: 0 00:17:26.787 Host Read Commands: 0 00:17:26.787 Host Write Commands: 0 00:17:26.787 Controller Busy Time: 0 minutes 00:17:26.787 Power Cycles: 0 00:17:26.787 Power On Hours: 0 hours 00:17:26.787 Unsafe Shutdowns: 0 00:17:26.787 Unrecoverable Media Errors: 0 00:17:26.787 Lifetime Error Log Entries: 0 00:17:26.787 Warning Temperature Time: 0 minutes 00:17:26.787 Critical Temperature Time: 0 minutes 00:17:26.787 00:17:26.787 Number of Queues 00:17:26.787 ================ 00:17:26.787 Number of I/O Submission Queues: 127 00:17:26.787 Number of I/O Completion Queues: 127 00:17:26.787 00:17:26.787 Active Namespaces 00:17:26.787 ================= 00:17:26.787 Namespace ID:1 00:17:26.788 Error Recovery Timeout: Unlimited 00:17:26.788 Command Set Identifier: NVM (00h) 00:17:26.788 Deallocate: Supported 00:17:26.788 Deallocated/Unwritten Error: Not Supported 00:17:26.788 Deallocated Read Value: Unknown 00:17:26.788 Deallocate in Write Zeroes: Not Supported 00:17:26.788 Deallocated Guard Field: 0xFFFF 00:17:26.788 Flush: Supported 00:17:26.788 Reservation: Supported 00:17:26.788 Namespace Sharing Capabilities: Multiple Controllers 00:17:26.788 Size (in LBAs): 131072 (0GiB) 00:17:26.788 Capacity (in LBAs): 131072 (0GiB) 00:17:26.788 Utilization (in LBAs): 131072 (0GiB) 00:17:26.788 NGUID: 19597A747D824E189616A1ACD3BBD8A5 00:17:26.788 UUID: 19597a74-7d82-4e18-9616-a1acd3bbd8a5 00:17:26.788 Thin Provisioning: Not Supported 00:17:26.788 Per-NS Atomic Units: Yes 00:17:26.788 Atomic Boundary Size (Normal): 0 00:17:26.788 Atomic Boundary Size (PFail): 0 00:17:26.788 Atomic Boundary Offset: 0 00:17:26.788 Maximum Single Source Range Length: 65535 00:17:26.788 Maximum Copy Length: 65535 00:17:26.788 Maximum Source Range Count: 1 00:17:26.788 NGUID/EUI64 Never Reused: No 00:17:26.788 Namespace Write Protected: No 00:17:26.788 Number of LBA Formats: 1 00:17:26.788 Current LBA Format: LBA Format #00 00:17:26.788 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:26.788 00:17:26.788 17:33:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:26.788 [2024-12-06 17:33:18.815327] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:32.069 Initializing NVMe Controllers 00:17:32.069 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:32.069 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:32.069 Initialization complete. Launching workers. 00:17:32.069 ======================================================== 00:17:32.069 Latency(us) 00:17:32.069 Device Information : IOPS MiB/s Average min max 00:17:32.069 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39999.80 156.25 3200.11 865.36 8736.12 00:17:32.069 ======================================================== 00:17:32.069 Total : 39999.80 156.25 3200.11 865.36 8736.12 00:17:32.069 00:17:32.069 [2024-12-06 17:33:23.923842] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:32.069 17:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:32.069 [2024-12-06 17:33:24.112442] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:37.349 Initializing NVMe Controllers 00:17:37.349 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:37.349 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:37.349 Initialization complete. Launching workers. 00:17:37.349 ======================================================== 00:17:37.349 Latency(us) 00:17:37.349 Device Information : IOPS MiB/s Average min max 00:17:37.349 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39955.10 156.07 3203.48 859.34 9766.29 00:17:37.349 ======================================================== 00:17:37.349 Total : 39955.10 156.07 3203.48 859.34 9766.29 00:17:37.349 00:17:37.349 [2024-12-06 17:33:29.132848] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:37.349 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:37.349 [2024-12-06 17:33:29.331081] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:42.640 [2024-12-06 17:33:34.479721] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:42.640 Initializing NVMe Controllers 00:17:42.640 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:42.640 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:42.640 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:42.640 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:42.640 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:42.640 Initialization complete. Launching workers. 
00:17:42.640 Starting thread on core 2 00:17:42.640 Starting thread on core 3 00:17:42.640 Starting thread on core 1 00:17:42.640 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:42.900 [2024-12-06 17:33:34.725981] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:47.100 [2024-12-06 17:33:38.753784] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:47.100 Initializing NVMe Controllers 00:17:47.100 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.100 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.100 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:47.100 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:47.100 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:47.100 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:47.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:47.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:47.100 Initialization complete. Launching workers. 00:17:47.100 Starting thread on core 1 with urgent priority queue 00:17:47.100 Starting thread on core 2 with urgent priority queue 00:17:47.100 Starting thread on core 3 with urgent priority queue 00:17:47.100 Starting thread on core 0 with urgent priority queue 00:17:47.100 SPDK bdev Controller (SPDK2 ) core 0: 17436.67 IO/s 5.74 secs/100000 ios 00:17:47.100 SPDK bdev Controller (SPDK2 ) core 1: 8265.00 IO/s 12.10 secs/100000 ios 00:17:47.100 SPDK bdev Controller (SPDK2 ) core 2: 14973.33 IO/s 6.68 secs/100000 ios 00:17:47.100 SPDK bdev Controller (SPDK2 ) core 3: 8029.33 IO/s 12.45 secs/100000 ios 00:17:47.100 ======================================================== 00:17:47.100 00:17:47.100 17:33:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:47.100 [2024-12-06 17:33:38.988556] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:47.100 Initializing NVMe Controllers 00:17:47.100 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.100 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.100 Namespace ID: 1 size: 0GB 00:17:47.100 Initialization complete. 00:17:47.100 INFO: using host memory buffer for IO 00:17:47.100 Hello world! 
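The example-tool runs traced above (nvmf_vfio_user.sh steps @84 through @88) can be replayed against the same endpoint with a short script. A minimal sketch, assuming this job's workspace layout; every path and flag is copied from the traced commands:

    #!/usr/bin/env bash
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # @84/@85: 4 KiB read then write, queue depth 128, 5 s, on lcore 1 (-c 0x2)
    for w in read write; do
        "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TR" -s 256 -g -q 128 -o 4096 -w "$w" -t 5 -c 0x2
    done
    # @86: reconnect stress, 50/50 random mix, lcores 1-3 (-c 0xE)
    "$SPDK_DIR/build/examples/reconnect" -r "$TR" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
    # @87: arbitration demo for 3 s
    "$SPDK_DIR/build/examples/arbitration" -t 3 -r "$TR" -d 256 -g
    # @88: single-IO smoke test
    "$SPDK_DIR/build/examples/hello_world" -d 256 -g -r "$TR"

The nvme_pcie_prp_list_append DEBUG lines at the top of this section can be sanity-checked against the standard PRP rule: PRP1 covers the first memory page of the transfer and one further entry is needed per additional page (the identify dump above reports a 4096-byte memory page size). A minimal checker, with prp_count a hypothetical helper name and the page-aligned hex addresses taken from the log:

    prp_count() {  # prp_count <virt_addr> <len>
        local addr=$1 len=$2 page=4096
        local first=$(( page - (addr & (page - 1)) ))   # bytes PRP1 can cover
        if (( len <= first )); then
            echo 1
        else
            echo $(( 1 + (len - first + page - 1) / page ))
        fi
    }
    prp_count 0x2000002f6000 8192   # -> 2, as logged for the 8 KiB GET LOG PAGE
    prp_count 0x2000002fc000 512    # -> 1
    prp_count 0x2000002f4000 4096   # -> 1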
00:17:47.100 [2024-12-06 17:33:38.999618] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:47.100 17:33:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:47.360 [2024-12-06 17:33:39.231545] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:48.302 Initializing NVMe Controllers 00:17:48.302 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:48.302 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:48.302 Initialization complete. Launching workers. 00:17:48.302 submit (in ns) avg, min, max = 5182.3, 2817.5, 3999165.0 00:17:48.302 complete (in ns) avg, min, max = 17405.1, 1627.5, 3998355.8 00:17:48.302 00:17:48.302 Submit histogram 00:17:48.302 ================ 00:17:48.302 Range in us Cumulative Count 00:17:48.302 2.813 - 2.827: 0.3323% ( 66) 00:17:48.302 2.827 - 2.840: 1.6766% ( 267) 00:17:48.302 2.840 - 2.853: 4.1285% ( 487) 00:17:48.302 2.853 - 2.867: 8.6446% ( 897) 00:17:48.302 2.867 - 2.880: 13.4679% ( 958) 00:17:48.302 2.880 - 2.893: 18.7292% ( 1045) 00:17:48.302 2.893 - 2.907: 24.1668% ( 1080) 00:17:48.302 2.907 - 2.920: 30.1178% ( 1182) 00:17:48.302 2.920 - 2.933: 36.2904% ( 1226) 00:17:48.302 2.933 - 2.947: 40.7008% ( 876) 00:17:48.302 2.947 - 2.960: 44.8948% ( 833) 00:17:48.302 2.960 - 2.973: 50.1309% ( 1040) 00:17:48.302 2.973 - 2.987: 57.9750% ( 1558) 00:17:48.302 2.987 - 3.000: 67.2188% ( 1836) 00:17:48.302 3.000 - 3.013: 77.2581% ( 1994) 00:17:48.302 3.013 - 3.027: 84.3671% ( 1412) 00:17:48.302 3.027 - 3.040: 90.4441% ( 1207) 00:17:48.302 3.040 - 3.053: 94.8142% ( 868) 00:17:48.302 3.053 - 3.067: 97.6941% ( 572) 00:17:48.302 3.067 - 3.080: 98.7312% ( 206) 00:17:48.302 3.080 - 3.093: 99.1844% ( 90) 00:17:48.302 3.093 - 3.107: 99.3908% ( 41) 00:17:48.302 3.107 - 3.120: 99.4562% ( 13) 00:17:48.302 3.120 - 3.133: 99.4764% ( 4) 00:17:48.302 3.133 - 3.147: 99.5016% ( 5) 00:17:48.302 3.147 - 3.160: 99.5217% ( 4) 00:17:48.302 3.160 - 3.173: 99.5318% ( 2) 00:17:48.302 3.173 - 3.187: 99.5418% ( 2) 00:17:48.302 3.200 - 3.213: 99.5670% ( 5) 00:17:48.302 3.213 - 3.227: 99.5771% ( 2) 00:17:48.302 3.267 - 3.280: 99.5872% ( 2) 00:17:48.302 3.307 - 3.320: 99.5922% ( 1) 00:17:48.302 3.440 - 3.467: 99.6023% ( 2) 00:17:48.302 3.520 - 3.547: 99.6073% ( 1) 00:17:48.302 3.600 - 3.627: 99.6123% ( 1) 00:17:48.302 3.653 - 3.680: 99.6224% ( 2) 00:17:48.302 3.733 - 3.760: 99.6274% ( 1) 00:17:48.302 4.053 - 4.080: 99.6325% ( 1) 00:17:48.302 4.080 - 4.107: 99.6375% ( 1) 00:17:48.302 4.133 - 4.160: 99.6425% ( 1) 00:17:48.302 4.560 - 4.587: 99.6476% ( 1) 00:17:48.302 4.613 - 4.640: 99.6526% ( 1) 00:17:48.302 4.693 - 4.720: 99.6576% ( 1) 00:17:48.302 4.720 - 4.747: 99.6627% ( 1) 00:17:48.302 4.827 - 4.853: 99.6677% ( 1) 00:17:48.302 4.907 - 4.933: 99.6778% ( 2) 00:17:48.302 4.933 - 4.960: 99.6828% ( 1) 00:17:48.302 5.040 - 5.067: 99.6878% ( 1) 00:17:48.302 5.093 - 5.120: 99.6979% ( 2) 00:17:48.302 5.253 - 5.280: 99.7030% ( 1) 00:17:48.302 5.333 - 5.360: 99.7080% ( 1) 00:17:48.302 5.413 - 5.440: 99.7130% ( 1) 00:17:48.302 5.440 - 5.467: 99.7181% ( 1) 00:17:48.302 5.467 - 5.493: 99.7231% ( 1) 00:17:48.302 5.547 - 5.573: 99.7281% ( 1) 00:17:48.302 5.573 - 5.600: 99.7332% ( 1) 00:17:48.302 5.600 - 5.627: 99.7382% ( 1) 00:17:48.302 5.627 - 5.653: 
99.7432% ( 1) 00:17:48.302 5.653 - 5.680: 99.7483% ( 1) 00:17:48.302 5.733 - 5.760: 99.7533% ( 1) 00:17:48.302 5.760 - 5.787: 99.7583% ( 1) 00:17:48.302 5.787 - 5.813: 99.7634% ( 1) 00:17:48.302 5.867 - 5.893: 99.7734% ( 2) 00:17:48.302 5.893 - 5.920: 99.7835% ( 2) 00:17:48.302 5.947 - 5.973: 99.8036% ( 4) 00:17:48.302 5.973 - 6.000: 99.8087% ( 1) 00:17:48.302 6.027 - 6.053: 99.8187% ( 2) 00:17:48.302 6.080 - 6.107: 99.8339% ( 3) 00:17:48.302 6.240 - 6.267: 99.8389% ( 1) 00:17:48.302 6.293 - 6.320: 99.8439% ( 1) 00:17:48.302 6.320 - 6.347: 99.8490% ( 1) 00:17:48.302 6.347 - 6.373: 99.8540% ( 1) 00:17:48.302 6.400 - 6.427: 99.8741% ( 4) 00:17:48.302 6.453 - 6.480: 99.8792% ( 1) 00:17:48.302 6.480 - 6.507: 99.8842% ( 1) 00:17:48.302 6.587 - 6.613: 99.8892% ( 1) 00:17:48.302 6.773 - 6.800: 99.8943% ( 1) 00:17:48.302 6.827 - 6.880: 99.8993% ( 1) 00:17:48.302 6.880 - 6.933: 99.9043% ( 1) 00:17:48.302 [2024-12-06 17:33:40.324356] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:48.302 7.147 - 7.200: 99.9094% ( 1) 00:17:48.302 7.307 - 7.360: 99.9144% ( 1) 00:17:48.302 7.360 - 7.413: 99.9194% ( 1) 00:17:48.302 7.467 - 7.520: 99.9245% ( 1) 00:17:48.302 7.947 - 8.000: 99.9295% ( 1) 00:17:48.302 8.160 - 8.213: 99.9396% ( 2) 00:17:48.302 8.427 - 8.480: 99.9446% ( 1) 00:17:48.302 3986.773 - 4014.080: 100.0000% ( 11) 00:17:48.302 00:17:48.302 Complete histogram 00:17:48.302 ================== 00:17:48.302 Range in us Cumulative Count 00:17:48.302 1.627 - 1.633: 0.0151% ( 3) 00:17:48.302 1.633 - 1.640: 0.0352% ( 4) 00:17:48.302 1.640 - 1.647: 0.8156% ( 155) 00:17:48.302 1.647 - 1.653: 0.9465% ( 26) 00:17:48.302 1.653 - 1.660: 1.0372% ( 18) 00:17:48.302 1.660 - 1.667: 1.2335% ( 39) 00:17:48.302 1.667 - 1.673: 1.2688% ( 7) 00:17:48.302 1.673 - 1.680: 11.5749% ( 2047) 00:17:48.302 1.680 - 1.687: 44.9199% ( 6623) 00:17:48.302 1.687 - 1.693: 47.7897% ( 570) 00:17:48.302 1.693 - 1.700: 64.7468% ( 3368) 00:17:48.302 1.700 - 1.707: 74.8666% ( 2010) 00:17:48.302 1.707 - 1.720: 82.3734% ( 1491) 00:17:48.302 1.720 - 1.733: 83.9291% ( 309) 00:17:48.302 1.733 - 1.747: 86.6579% ( 542) 00:17:48.302 1.747 - 1.760: 91.5014% ( 962) 00:17:48.302 1.760 - 1.773: 95.7406% ( 842) 00:17:48.302 1.773 - 1.787: 98.3486% ( 518) 00:17:48.302 1.787 - 1.800: 99.1189% ( 153) 00:17:48.302 1.800 - 1.813: 99.2649% ( 29) 00:17:48.302 1.813 - 1.827: 99.3052% ( 8) 00:17:48.302 1.827 - 1.840: 99.3203% ( 3) 00:17:48.302 1.840 - 1.853: 99.3253% ( 1) 00:17:48.302 1.853 - 1.867: 99.3304% ( 1) 00:17:48.302 1.867 - 1.880: 99.3404% ( 2) 00:17:48.302 1.880 - 1.893: 99.3455% ( 1) 00:17:48.302 1.893 - 1.907: 99.3556% ( 2) 00:17:48.302 1.907 - 1.920: 99.3606% ( 1) 00:17:48.302 1.933 - 1.947: 99.3656% ( 1) 00:17:48.302 1.947 - 1.960: 99.3707% ( 1) 00:17:48.302 1.960 - 1.973: 99.3757% ( 1) 00:17:48.302 1.973 - 1.987: 99.3807% ( 1) 00:17:48.302 2.093 - 2.107: 99.3858% ( 1) 00:17:48.302 2.107 - 2.120: 99.3908% ( 1) 00:17:48.303 2.160 - 2.173: 99.3958% ( 1) 00:17:48.303 3.520 - 3.547: 99.4009% ( 1) 00:17:48.303 3.840 - 3.867: 99.4059% ( 1) 00:17:48.303 3.893 - 3.920: 99.4109% ( 1) 00:17:48.303 4.080 - 4.107: 99.4160% ( 1) 00:17:48.303 4.213 - 4.240: 99.4210% ( 1) 00:17:48.303 4.240 - 4.267: 99.4260% ( 1) 00:17:48.303 4.320 - 4.347: 99.4311% ( 1) 00:17:48.303 4.347 - 4.373: 99.4361% ( 1) 00:17:48.303 4.373 - 4.400: 99.4411% ( 1) 00:17:48.303 4.400 - 4.427: 99.4462% ( 1) 00:17:48.303 4.427 - 4.453: 99.4613% ( 3) 00:17:48.303 4.453 - 4.480: 99.4663% ( 1) 00:17:48.303 4.507 - 4.533: 99.4714% ( 1) 
00:17:48.303 4.560 - 4.587: 99.4764% ( 1) 00:17:48.303 4.667 - 4.693: 99.4814% ( 1) 00:17:48.303 4.720 - 4.747: 99.4865% ( 1) 00:17:48.303 4.880 - 4.907: 99.4915% ( 1) 00:17:48.303 4.933 - 4.960: 99.5066% ( 3) 00:17:48.303 4.987 - 5.013: 99.5116% ( 1) 00:17:48.303 5.013 - 5.040: 99.5167% ( 1) 00:17:48.303 5.067 - 5.093: 99.5217% ( 1) 00:17:48.303 5.093 - 5.120: 99.5267% ( 1) 00:17:48.303 5.120 - 5.147: 99.5368% ( 2) 00:17:48.303 5.173 - 5.200: 99.5418% ( 1) 00:17:48.303 5.227 - 5.253: 99.5469% ( 1) 00:17:48.303 5.307 - 5.333: 99.5519% ( 1) 00:17:48.303 5.387 - 5.413: 99.5569% ( 1) 00:17:48.303 5.440 - 5.467: 99.5620% ( 1) 00:17:48.303 5.920 - 5.947: 99.5670% ( 1) 00:17:48.303 5.947 - 5.973: 99.5720% ( 1) 00:17:48.303 6.267 - 6.293: 99.5771% ( 1) 00:17:48.303 7.040 - 7.093: 99.5821% ( 1) 00:17:48.303 10.187 - 10.240: 99.5872% ( 1) 00:17:48.303 11.467 - 11.520: 99.5922% ( 1) 00:17:48.303 33.920 - 34.133: 99.5972% ( 1) 00:17:48.303 41.813 - 42.027: 99.6023% ( 1) 00:17:48.303 130.560 - 131.413: 99.6073% ( 1) 00:17:48.303 3986.773 - 4014.080: 100.0000% ( 78) 00:17:48.303 00:17:48.303 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:48.303 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:48.303 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:48.303 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:48.303 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:48.564 [ 00:17:48.564 { 00:17:48.565 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:48.565 "subtype": "Discovery", 00:17:48.565 "listen_addresses": [], 00:17:48.565 "allow_any_host": true, 00:17:48.565 "hosts": [] 00:17:48.565 }, 00:17:48.565 { 00:17:48.565 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:48.565 "subtype": "NVMe", 00:17:48.565 "listen_addresses": [ 00:17:48.565 { 00:17:48.565 "trtype": "VFIOUSER", 00:17:48.565 "adrfam": "IPv4", 00:17:48.565 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:48.565 "trsvcid": "0" 00:17:48.565 } 00:17:48.565 ], 00:17:48.565 "allow_any_host": true, 00:17:48.565 "hosts": [], 00:17:48.565 "serial_number": "SPDK1", 00:17:48.565 "model_number": "SPDK bdev Controller", 00:17:48.565 "max_namespaces": 32, 00:17:48.565 "min_cntlid": 1, 00:17:48.565 "max_cntlid": 65519, 00:17:48.565 "namespaces": [ 00:17:48.565 { 00:17:48.565 "nsid": 1, 00:17:48.565 "bdev_name": "Malloc1", 00:17:48.565 "name": "Malloc1", 00:17:48.565 "nguid": "2C29EF9C28F849E9A254AF3F69722AAB", 00:17:48.565 "uuid": "2c29ef9c-28f8-49e9-a254-af3f69722aab" 00:17:48.565 }, 00:17:48.565 { 00:17:48.565 "nsid": 2, 00:17:48.565 "bdev_name": "Malloc3", 00:17:48.565 "name": "Malloc3", 00:17:48.565 "nguid": "436A159B9D1C4DF792A83677A8AB6BCC", 00:17:48.565 "uuid": "436a159b-9d1c-4df7-92a8-3677a8ab6bcc" 00:17:48.565 } 00:17:48.565 ] 00:17:48.565 }, 00:17:48.565 { 00:17:48.565 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:48.565 "subtype": "NVMe", 00:17:48.565 "listen_addresses": [ 00:17:48.565 { 00:17:48.565 "trtype": "VFIOUSER", 00:17:48.565 "adrfam": "IPv4", 00:17:48.565 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:48.565 "trsvcid": "0" 
00:17:48.565 } 00:17:48.565 ], 00:17:48.565 "allow_any_host": true, 00:17:48.565 "hosts": [], 00:17:48.565 "serial_number": "SPDK2", 00:17:48.565 "model_number": "SPDK bdev Controller", 00:17:48.565 "max_namespaces": 32, 00:17:48.565 "min_cntlid": 1, 00:17:48.565 "max_cntlid": 65519, 00:17:48.565 "namespaces": [ 00:17:48.565 { 00:17:48.565 "nsid": 1, 00:17:48.565 "bdev_name": "Malloc2", 00:17:48.565 "name": "Malloc2", 00:17:48.565 "nguid": "19597A747D824E189616A1ACD3BBD8A5", 00:17:48.565 "uuid": "19597a74-7d82-4e18-9616-a1acd3bbd8a5" 00:17:48.565 } 00:17:48.565 ] 00:17:48.565 } 00:17:48.565 ] 00:17:48.565 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:48.565 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:48.565 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1654309 00:17:48.565 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:48.565 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:48.565 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:48.565 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:48.565 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:48.565 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:48.565 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:48.826 [2024-12-06 17:33:40.698527] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:48.826 Malloc4 00:17:48.826 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:49.086 [2024-12-06 17:33:40.896930] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:49.086 17:33:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:49.086 Asynchronous Event Request test 00:17:49.086 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:49.086 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:49.086 Registering asynchronous event callbacks... 00:17:49.086 Starting namespace attribute notice tests for all controllers... 00:17:49.086 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:49.086 aer_cb - Changed Namespace 00:17:49.086 Cleaning up... 
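The AER exercise just completed boils down to: start the aer test blocked on a touch file, hot-add a second namespace so the target raises a namespace-attribute notice, then re-list the subsystems to confirm it. A minimal sketch using the exact commands from the trace; pids and temp paths will differ per run, and the touch-file synchronization (waitforfile at @37) is elided:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    "$SPDK_DIR/test/nvme/aer/aer" -r "$TR" -n 2 -g -t /tmp/aer_touch_file &  # step @30
    aerpid=$!
    # step @40/@41: adding a new bdev + namespace triggers the async event
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 --name Malloc4
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    "$SPDK_DIR/scripts/rpc.py" nvmf_get_subsystems   # step @42: Malloc4 now listed
    wait "$aerpid"                                   # step @44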
00:17:49.086 [ 00:17:49.086 { 00:17:49.086 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:49.086 "subtype": "Discovery", 00:17:49.086 "listen_addresses": [], 00:17:49.086 "allow_any_host": true, 00:17:49.086 "hosts": [] 00:17:49.086 }, 00:17:49.086 { 00:17:49.086 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:49.086 "subtype": "NVMe", 00:17:49.086 "listen_addresses": [ 00:17:49.086 { 00:17:49.086 "trtype": "VFIOUSER", 00:17:49.086 "adrfam": "IPv4", 00:17:49.086 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:49.086 "trsvcid": "0" 00:17:49.086 } 00:17:49.086 ], 00:17:49.086 "allow_any_host": true, 00:17:49.086 "hosts": [], 00:17:49.086 "serial_number": "SPDK1", 00:17:49.086 "model_number": "SPDK bdev Controller", 00:17:49.086 "max_namespaces": 32, 00:17:49.086 "min_cntlid": 1, 00:17:49.086 "max_cntlid": 65519, 00:17:49.086 "namespaces": [ 00:17:49.086 { 00:17:49.086 "nsid": 1, 00:17:49.086 "bdev_name": "Malloc1", 00:17:49.086 "name": "Malloc1", 00:17:49.086 "nguid": "2C29EF9C28F849E9A254AF3F69722AAB", 00:17:49.086 "uuid": "2c29ef9c-28f8-49e9-a254-af3f69722aab" 00:17:49.086 }, 00:17:49.086 { 00:17:49.086 "nsid": 2, 00:17:49.086 "bdev_name": "Malloc3", 00:17:49.086 "name": "Malloc3", 00:17:49.086 "nguid": "436A159B9D1C4DF792A83677A8AB6BCC", 00:17:49.086 "uuid": "436a159b-9d1c-4df7-92a8-3677a8ab6bcc" 00:17:49.086 } 00:17:49.086 ] 00:17:49.086 }, 00:17:49.086 { 00:17:49.086 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:49.086 "subtype": "NVMe", 00:17:49.086 "listen_addresses": [ 00:17:49.086 { 00:17:49.086 "trtype": "VFIOUSER", 00:17:49.086 "adrfam": "IPv4", 00:17:49.086 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:49.086 "trsvcid": "0" 00:17:49.086 } 00:17:49.086 ], 00:17:49.086 "allow_any_host": true, 00:17:49.086 "hosts": [], 00:17:49.086 "serial_number": "SPDK2", 00:17:49.086 "model_number": "SPDK bdev Controller", 00:17:49.086 "max_namespaces": 32, 00:17:49.086 "min_cntlid": 1, 00:17:49.086 "max_cntlid": 65519, 00:17:49.086 "namespaces": [ 00:17:49.086 { 00:17:49.086 "nsid": 1, 00:17:49.086 "bdev_name": "Malloc2", 00:17:49.086 "name": "Malloc2", 00:17:49.086 "nguid": "19597A747D824E189616A1ACD3BBD8A5", 00:17:49.086 "uuid": "19597a74-7d82-4e18-9616-a1acd3bbd8a5" 00:17:49.086 }, 00:17:49.086 { 00:17:49.086 "nsid": 2, 00:17:49.086 "bdev_name": "Malloc4", 00:17:49.086 "name": "Malloc4", 00:17:49.086 "nguid": "DF08435D26A144D28D97A6F8F6B239B4", 00:17:49.086 "uuid": "df08435d-26a1-44d2-8d97-a6f8f6b239b4" 00:17:49.086 } 00:17:49.086 ] 00:17:49.086 } 00:17:49.086 ] 00:17:49.086 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1654309 00:17:49.086 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:49.086 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1653097 00:17:49.086 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1653097 ']' 00:17:49.086 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1653097 00:17:49.086 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:49.086 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.087 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1653097 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1653097' 00:17:49.347 killing process with pid 1653097 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1653097 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1653097 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1654333 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1654333' 00:17:49.347 Process pid: 1654333 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1654333 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1654333 ']' 00:17:49.347 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.348 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.348 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.348 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.348 17:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:49.348 [2024-12-06 17:33:41.373218] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:49.348 [2024-12-06 17:33:41.374156] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:17:49.348 [2024-12-06 17:33:41.374199] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.608 [2024-12-06 17:33:41.459411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.608 [2024-12-06 17:33:41.488311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.608 [2024-12-06 17:33:41.488343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.608 [2024-12-06 17:33:41.488349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.608 [2024-12-06 17:33:41.488354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.608 [2024-12-06 17:33:41.488358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.608 [2024-12-06 17:33:41.489559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.608 [2024-12-06 17:33:41.489694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.608 [2024-12-06 17:33:41.490020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.608 [2024-12-06 17:33:41.490020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.608 [2024-12-06 17:33:41.541443] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:49.608 [2024-12-06 17:33:41.542346] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:49.608 [2024-12-06 17:33:41.543296] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:49.608 [2024-12-06 17:33:41.544031] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:49.608 [2024-12-06 17:33:41.544047] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
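The interrupt-mode bring-up performed per device in the trace that follows can be summarized as: start nvmf_tgt with --interrupt-mode, create the VFIOUSER transport with interrupt support (-M -I), then publish one 64 MiB malloc namespace per vfio-user socket. A minimal sketch built from the traced steps @54 through @74; the loop over two devices mirrors the script's seq over NUM_DEVICES, and the sleep stands in for the traced waitforlisten RPC-socket poll:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    sleep 1   # traced script uses waitforlisten on /var/tmp/spdk.sock instead

    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I   # @64
    for i in 1 2; do                                                     # @68: seq 1 $NUM_DEVICES
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"              # @69
        "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$i"                     # @71
        "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"  # @72
        "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"      # @73
        "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0                     # @74
    done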
00:17:50.180 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.180 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:50.180 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:51.120 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:51.380 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:51.380 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:51.380 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:51.380 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:51.380 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:51.640 Malloc1 00:17:51.640 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:51.901 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:51.901 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:52.161 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:52.161 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:52.161 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:52.421 Malloc2 00:17:52.421 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:52.682 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:52.682 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1654333 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1654333 ']' 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1654333 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1654333 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654333' 00:17:52.942 killing process with pid 1654333 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1654333 00:17:52.942 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1654333 00:17:53.201 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:53.201 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:53.201 00:17:53.201 real 0m51.900s 00:17:53.201 user 3m19.111s 00:17:53.201 sys 0m2.674s 00:17:53.201 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.201 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:53.202 ************************************ 00:17:53.202 END TEST nvmf_vfio_user 00:17:53.202 ************************************ 00:17:53.202 17:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:53.202 17:33:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:53.202 17:33:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.202 17:33:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.202 ************************************ 00:17:53.202 START TEST nvmf_vfio_user_nvme_compliance 00:17:53.202 ************************************ 00:17:53.202 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:53.202 * Looking for test storage... 
00:17:53.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:53.202 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:53.202 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:17:53.202 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:53.462 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:53.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.463 --rc genhtml_branch_coverage=1 00:17:53.463 --rc genhtml_function_coverage=1 00:17:53.463 --rc genhtml_legend=1 00:17:53.463 --rc geninfo_all_blocks=1 00:17:53.463 --rc geninfo_unexecuted_blocks=1 00:17:53.463 00:17:53.463 ' 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:53.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.463 --rc genhtml_branch_coverage=1 00:17:53.463 --rc genhtml_function_coverage=1 00:17:53.463 --rc genhtml_legend=1 00:17:53.463 --rc geninfo_all_blocks=1 00:17:53.463 --rc geninfo_unexecuted_blocks=1 00:17:53.463 00:17:53.463 ' 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:53.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.463 --rc genhtml_branch_coverage=1 00:17:53.463 --rc genhtml_function_coverage=1 00:17:53.463 --rc genhtml_legend=1 00:17:53.463 --rc geninfo_all_blocks=1 00:17:53.463 --rc geninfo_unexecuted_blocks=1 00:17:53.463 00:17:53.463 ' 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:53.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.463 --rc genhtml_branch_coverage=1 00:17:53.463 --rc genhtml_function_coverage=1 00:17:53.463 --rc genhtml_legend=1 00:17:53.463 --rc geninfo_all_blocks=1 00:17:53.463 --rc 
geninfo_unexecuted_blocks=1 00:17:53.463 00:17:53.463 ' 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:53.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1654460 00:17:53.463 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1654460' 00:17:53.463 Process pid: 1654460 00:17:53.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:53.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1654460 00:17:53.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:53.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1654460 ']' 00:17:53.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.464 [2024-12-06 17:33:45.447430] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:17:53.464 [2024-12-06 17:33:45.447506] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.723 [2024-12-06 17:33:45.533267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:53.723 [2024-12-06 17:33:45.567526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.723 [2024-12-06 17:33:45.567558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.723 [2024-12-06 17:33:45.567564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.723 [2024-12-06 17:33:45.567569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.723 [2024-12-06 17:33:45.567574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.723 [2024-12-06 17:33:45.568775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.723 [2024-12-06 17:33:45.569044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.723 [2024-12-06 17:33:45.569045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.292 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.292 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:54.292 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:55.230 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:55.230 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:55.230 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:55.230 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.230 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:55.230 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.230 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:55.231 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:55.231 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.231 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:55.491 malloc0 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:55.491 17:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.491 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:55.491 00:17:55.491 00:17:55.491 CUnit - A unit testing framework for C - Version 2.1-3 00:17:55.491 http://cunit.sourceforge.net/ 00:17:55.491 00:17:55.491 00:17:55.491 Suite: nvme_compliance 00:17:55.491 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 17:33:47.500358] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.491 [2024-12-06 17:33:47.501659] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:55.491 [2024-12-06 17:33:47.501671] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:55.491 [2024-12-06 17:33:47.501676] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:55.491 [2024-12-06 17:33:47.503381] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.491 passed 00:17:55.753 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 17:33:47.576876] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.753 [2024-12-06 17:33:47.579887] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.753 passed 00:17:55.753 Test: admin_identify_ns ...[2024-12-06 17:33:47.659003] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.753 [2024-12-06 17:33:47.719648] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:55.753 [2024-12-06 17:33:47.727644] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:55.753 [2024-12-06 17:33:47.748736] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:17:55.753 passed 00:17:56.013 Test: admin_get_features_mandatory_features ...[2024-12-06 17:33:47.822970] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.013 [2024-12-06 17:33:47.825985] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.013 passed 00:17:56.013 Test: admin_get_features_optional_features ...[2024-12-06 17:33:47.902477] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.013 [2024-12-06 17:33:47.905496] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.013 passed 00:17:56.013 Test: admin_set_features_number_of_queues ...[2024-12-06 17:33:47.980203] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.273 [2024-12-06 17:33:48.084718] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.273 passed 00:17:56.273 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 17:33:48.159737] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.273 [2024-12-06 17:33:48.162751] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.273 passed 00:17:56.273 Test: admin_get_log_page_with_lpo ...[2024-12-06 17:33:48.239992] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.273 [2024-12-06 17:33:48.308647] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:56.273 [2024-12-06 17:33:48.321691] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.534 passed 00:17:56.534 Test: fabric_property_get ...[2024-12-06 17:33:48.392852] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.534 [2024-12-06 17:33:48.394052] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:56.534 [2024-12-06 17:33:48.397882] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.534 passed 00:17:56.534 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 17:33:48.472326] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.534 [2024-12-06 17:33:48.473524] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:56.534 [2024-12-06 17:33:48.475345] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.534 passed 00:17:56.534 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 17:33:48.550990] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.794 [2024-12-06 17:33:48.638642] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:56.794 [2024-12-06 17:33:48.654642] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:56.794 [2024-12-06 17:33:48.659713] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.794 passed 00:17:56.794 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 17:33:48.730924] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.794 [2024-12-06 17:33:48.732118] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:56.794 [2024-12-06 17:33:48.733935] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.794 passed 00:17:56.794 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 17:33:48.812664] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.054 [2024-12-06 17:33:48.889647] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:57.054 [2024-12-06 17:33:48.913644] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:57.054 [2024-12-06 17:33:48.918715] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.054 passed 00:17:57.054 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 17:33:48.991899] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.054 [2024-12-06 17:33:48.993095] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:57.054 [2024-12-06 17:33:48.993113] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:57.054 [2024-12-06 17:33:48.994919] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.054 passed 00:17:57.054 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 17:33:49.068656] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.314 [2024-12-06 17:33:49.161646] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:57.314 [2024-12-06 17:33:49.169648] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:57.314 [2024-12-06 17:33:49.177647] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:57.314 [2024-12-06 17:33:49.185641] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:57.314 [2024-12-06 17:33:49.214713] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.314 passed 00:17:57.314 Test: admin_create_io_sq_verify_pc ...[2024-12-06 17:33:49.289763] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.314 [2024-12-06 17:33:49.307648] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:57.314 [2024-12-06 17:33:49.324900] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.314 passed 00:17:57.574 Test: admin_create_io_qp_max_qps ...[2024-12-06 17:33:49.399372] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:58.514 [2024-12-06 17:33:50.495645] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:59.084 [2024-12-06 17:33:50.880364] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.084 passed 00:17:59.084 Test: admin_create_io_sq_shared_cq ...[2024-12-06 17:33:50.956190] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:59.084 [2024-12-06 17:33:51.088642] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:59.084 [2024-12-06 17:33:51.125694] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.344 passed 00:17:59.344 00:17:59.344 Run Summary: Type Total Ran Passed Failed Inactive 00:17:59.344 suites 1 1 n/a 0 0 00:17:59.344 tests 18 18 18 0 0 00:17:59.344 asserts 
360 360 360 0 n/a 00:17:59.344 00:17:59.344 Elapsed time = 1.491 seconds 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1654460 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1654460 ']' 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1654460 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1654460 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654460' 00:17:59.344 killing process with pid 1654460 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1654460 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1654460 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:59.344 00:17:59.344 real 0m6.203s 00:17:59.344 user 0m17.556s 00:17:59.344 sys 0m0.546s 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:59.344 ************************************ 00:17:59.344 END TEST nvmf_vfio_user_nvme_compliance 00:17:59.344 ************************************ 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.344 17:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:59.607 ************************************ 00:17:59.607 START TEST nvmf_vfio_user_fuzz 00:17:59.607 ************************************ 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:59.607 * Looking for test storage... 
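Both the compliance run that just finished and the fuzz run starting here build the same vfio-user target with the rpc_cmd sequence traced above: create the transport, create a malloc bdev, create the subsystem, attach the namespace, add the listener. A condensed sketch of the equivalent standalone setup via spdk's scripts/rpc.py, with values copied from the compliance trace (the fuzz run omits -m 32, the max-namespaces cap; -a allows any host and -s sets the serial):

    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

(rpc_cmd in the harness is a thin wrapper around rpc.py pointed at the target's /var/tmp/spdk.sock.) With the target configured this way, the 18-test CUnit suite can be re-run standalone with the command the trace used:

    ./test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'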
00:17:59.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:59.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.607 --rc genhtml_branch_coverage=1 00:17:59.607 --rc genhtml_function_coverage=1 00:17:59.607 --rc genhtml_legend=1 00:17:59.607 --rc geninfo_all_blocks=1 00:17:59.607 --rc geninfo_unexecuted_blocks=1 00:17:59.607 00:17:59.607 ' 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:59.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.607 --rc genhtml_branch_coverage=1 00:17:59.607 --rc genhtml_function_coverage=1 00:17:59.607 --rc genhtml_legend=1 00:17:59.607 --rc geninfo_all_blocks=1 00:17:59.607 --rc geninfo_unexecuted_blocks=1 00:17:59.607 00:17:59.607 ' 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:59.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.607 --rc genhtml_branch_coverage=1 00:17:59.607 --rc genhtml_function_coverage=1 00:17:59.607 --rc genhtml_legend=1 00:17:59.607 --rc geninfo_all_blocks=1 00:17:59.607 --rc geninfo_unexecuted_blocks=1 00:17:59.607 00:17:59.607 ' 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:59.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.607 --rc genhtml_branch_coverage=1 00:17:59.607 --rc genhtml_function_coverage=1 00:17:59.607 --rc genhtml_legend=1 00:17:59.607 --rc geninfo_all_blocks=1 00:17:59.607 --rc geninfo_unexecuted_blocks=1 00:17:59.607 00:17:59.607 ' 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.607 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:59.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1654628 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1654628' 00:17:59.608 Process pid: 1654628 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1654628 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1654628 ']' 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
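waitforlisten above blocks until the freshly forked nvmf_tgt (pid 1654628, pinned to core 0 by -m 0x1, versus -m 0x7 and three reactors for the compliance target earlier) answers on its RPC socket. The real helper lives in common/autotest_common.sh and does more than this; a simplified sketch of the idea only, not the actual implementation:

    # Poll until the target's RPC socket exists; bail out if the pid dies.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for i in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            [ -S "$rpc_addr" ] && return 0           # socket is up
            sleep 0.1
        done
        return 1                                     # timed out
    }

The socket file can exist before the app actually accepts connections, which is why the real helper goes further and retries an actual RPC call against it.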
00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.608 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.548 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.548 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:00.548 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:01.488 malloc0 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.488 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
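The 30-second fuzz pass that follows can be re-run by hand with the same binary and flags traced below (path shortened to an spdk checkout). Per the trace, -t 30 is the run duration in seconds, which matches the ~33 s wall time reported at the end, and -S 123456 supplies the seed; check nvme_fuzz -h for the remaining flags rather than trusting this note:

    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
        -N -a

Pinning the seed is what makes the closing statistics (1325672 I/O commands, 5190 successful) comparable across repeated runs of the same build.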
00:18:01.749 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:33.869 Fuzzing completed. Shutting down the fuzz application 00:18:33.869 00:18:33.869 Dumping successful admin opcodes: 00:18:33.869 9, 10, 00:18:33.869 Dumping successful io opcodes: 00:18:33.869 0, 00:18:33.869 NS: 0x20000081ef00 I/O qp, Total commands completed: 1325672, total successful commands: 5190, random_seed: 1452272704 00:18:33.869 NS: 0x20000081ef00 admin qp, Total commands completed: 298640, total successful commands: 73, random_seed: 845086400 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1654628 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1654628 ']' 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1654628 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1654628 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654628' 00:18:33.869 killing process with pid 1654628 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1654628 00:18:33.869 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1654628 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:33.869 00:18:33.869 real 0m32.785s 00:18:33.869 user 0m38.334s 00:18:33.869 sys 0m23.266s 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:33.869 ************************************ 
00:18:33.869 END TEST nvmf_vfio_user_fuzz 00:18:33.869 ************************************ 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:33.869 ************************************ 00:18:33.869 START TEST nvmf_auth_target 00:18:33.869 ************************************ 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:33.869 * Looking for test storage... 00:18:33.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:33.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.869 --rc genhtml_branch_coverage=1 00:18:33.869 --rc genhtml_function_coverage=1 00:18:33.869 --rc genhtml_legend=1 00:18:33.869 --rc geninfo_all_blocks=1 00:18:33.869 --rc geninfo_unexecuted_blocks=1 00:18:33.869 00:18:33.869 ' 00:18:33.869 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:33.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.869 --rc genhtml_branch_coverage=1 00:18:33.869 --rc genhtml_function_coverage=1 00:18:33.870 --rc genhtml_legend=1 00:18:33.870 --rc geninfo_all_blocks=1 00:18:33.870 --rc geninfo_unexecuted_blocks=1 00:18:33.870 00:18:33.870 ' 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:33.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.870 --rc genhtml_branch_coverage=1 00:18:33.870 --rc genhtml_function_coverage=1 00:18:33.870 --rc genhtml_legend=1 00:18:33.870 --rc geninfo_all_blocks=1 00:18:33.870 --rc geninfo_unexecuted_blocks=1 00:18:33.870 00:18:33.870 ' 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:33.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.870 --rc genhtml_branch_coverage=1 00:18:33.870 --rc genhtml_function_coverage=1 00:18:33.870 --rc genhtml_legend=1 00:18:33.870 --rc geninfo_all_blocks=1 00:18:33.870 --rc geninfo_unexecuted_blocks=1 00:18:33.870 00:18:33.870 ' 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.870 17:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:33.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:33.870 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:40.587 
17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:40.587 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:40.587 17:34:31 
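The device scan above keys off PCI vendor/device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus the Mellanox ConnectX family) and then resolves each matching function to its netdev through sysfs. A minimal standalone sketch of the same idea, assuming lspci is available; this is an illustration, not SPDK's gather_supported_nvmf_pci_devs itself:

  intel=8086
  # List E810 functions (device ID 0x159b, the one matched in the trace),
  # then report the network interfaces sysfs exposes for each of them.
  for pci in $(lspci -Dn -d "$intel:159b" | awk '{print $1}'); do
      for net in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
      done
  done
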
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:40.587 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:40.587 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:40.588 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:40.588 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:40.588 17:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:40.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:18:40.588 00:18:40.588 --- 10.0.0.2 ping statistics --- 00:18:40.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.588 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:18:40.588 00:18:40.588 --- 10.0.0.1 ping statistics --- 00:18:40.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.588 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1657430 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1657430 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1657430 ']' 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
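Stripped of the xtrace noise, the nvmf_tcp_init sequence above does one thing: move one E810 port into a private network namespace to act as the target, keep its twin in the root namespace as the initiator, give the pair 10.0.0.0/24 addresses, open TCP/4420, and prove reachability both ways. Consolidated from the trace (the cvl_0_0/cvl_0_1 names are what this machine's ice driver assigned, and stale addresses are flushed first):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
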
00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.588 17:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1657460 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=475ed2b2117a1e9381d4d94415d093ac4ebf36d0db31898e 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5A7 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 475ed2b2117a1e9381d4d94415d093ac4ebf36d0db31898e 0 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 475ed2b2117a1e9381d4d94415d093ac4ebf36d0db31898e 0 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=475ed2b2117a1e9381d4d94415d093ac4ebf36d0db31898e 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
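The `python -` body is a heredoc, so xtrace never shows it; only its effect is visible in the DHHC-1 strings used later (e.g. DHHC-1:00:NDc1ZWQy...t7WMNA==:, whose base64 payload decodes to the 48-character hex string above plus four trailing bytes). That is consistent with the NVMe in-band-auth secret representation: base64 of the secret bytes followed by their little-endian CRC-32, behind a two-hex-digit hash identifier (00 = none, 01/02/03 = sha256/sha384/sha512). A plausible reconstruction of what format_key computes, not SPDK's verbatim heredoc:

  import base64, zlib

  key = b"475ed2b2117a1e9381d4d94415d093ac4ebf36d0db31898e"  # hex string from the trace
  digest = 0                                                  # null digest -> "00"
  crc = zlib.crc32(key).to_bytes(4, "little")                 # CRC-32 integrity tag
  print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
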
00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5A7 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5A7 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.5A7 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=53dba72d2f8f75e0dd2098253ca0f19c2fb33a8eb474ec40ee4b03be96b8e46f 00:18:40.871 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hpf 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 53dba72d2f8f75e0dd2098253ca0f19c2fb33a8eb474ec40ee4b03be96b8e46f 3 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 53dba72d2f8f75e0dd2098253ca0f19c2fb33a8eb474ec40ee4b03be96b8e46f 3 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=53dba72d2f8f75e0dd2098253ca0f19c2fb33a8eb474ec40ee4b03be96b8e46f 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hpf 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hpf 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.hpf 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8ed66c17b1c05af0ff5984bc3fac38ec 00:18:40.872 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vsG 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8ed66c17b1c05af0ff5984bc3fac38ec 1 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8ed66c17b1c05af0ff5984bc3fac38ec 1 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8ed66c17b1c05af0ff5984bc3fac38ec 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vsG 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vsG 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.vsG 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c349fcf659860e9765de9776a885eca855fe7fd618f44a48 00:18:41.135 17:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5DM 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c349fcf659860e9765de9776a885eca855fe7fd618f44a48 2 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c349fcf659860e9765de9776a885eca855fe7fd618f44a48 2 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:41.135 17:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c349fcf659860e9765de9776a885eca855fe7fd618f44a48 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5DM 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5DM 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.5DM 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1f0bc9a9ffe02a85d879770da0843176aa4070f94de32b5f 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5Ng 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1f0bc9a9ffe02a85d879770da0843176aa4070f94de32b5f 2 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1f0bc9a9ffe02a85d879770da0843176aa4070f94de32b5f 2 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1f0bc9a9ffe02a85d879770da0843176aa4070f94de32b5f 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5Ng 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5Ng 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.5Ng 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bd331ba76e953c20afdeeced4d017c1a 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qr1 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bd331ba76e953c20afdeeced4d017c1a 1 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bd331ba76e953c20afdeeced4d017c1a 1 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bd331ba76e953c20afdeeced4d017c1a 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qr1 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qr1 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.qr1 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:41.135 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:41.398 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9f9a73d001cb32e28002c33890850ed0636cd90e73f8ee36a570129cc91a3a02 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.w2u 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 9f9a73d001cb32e28002c33890850ed0636cd90e73f8ee36a570129cc91a3a02 3 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9f9a73d001cb32e28002c33890850ed0636cd90e73f8ee36a570129cc91a3a02 3 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9f9a73d001cb32e28002c33890850ed0636cd90e73f8ee36a570129cc91a3a02 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.w2u 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.w2u 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.w2u 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1657430 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1657430 ']' 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.399 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1657460 /var/tmp/host.sock 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1657460 ']' 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:41.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5A7 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.5A7 00:18:41.667 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.5A7 00:18:41.929 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.hpf ]] 00:18:41.929 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hpf 00:18:41.929 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.929 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.929 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.929 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hpf 00:18:41.929 17:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hpf 00:18:42.189 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:42.189 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vsG 00:18:42.189 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.189 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.189 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.189 17:34:34 
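The registration just traced for key0/ckey0 repeats for every generated file: each key must exist in both daemons' keyrings, so it is added to the nvmf_tgt target and, via hostrpc, to the spdk_tgt host app on /var/tmp/host.sock. Flattened for the key0 pair (the hostrpc expansions are verbatim from the trace; assuming rpc_cmd resolves to the same rpc.py against the default /var/tmp/spdk.sock):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc keyring_file_add_key key0 /tmp/spdk.key-null.5A7                           # target keyring
  $rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.5A7     # host keyring
  $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hpf                        # ctrlr key, target
  $rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hpf  # ctrlr key, host
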
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vsG 00:18:42.189 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vsG 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.5DM ]] 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5DM 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5DM 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5DM 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5Ng 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.451 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.712 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.712 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5Ng 00:18:42.712 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5Ng 00:18:42.712 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.qr1 ]] 00:18:42.712 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qr1 00:18:42.712 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.712 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.712 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.712 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qr1 00:18:42.712 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qr1 00:18:42.972 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:42.973 17:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.w2u 00:18:42.973 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.973 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.973 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.973 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.w2u 00:18:42.973 17:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.w2u 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.233 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.233 
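With all keys loaded, the test walks the digest x dhgroup x key matrix, and every iteration is the same trio of RPCs traced here, shown for the sha256 + null + key0 case (the full rpc.py expansion of the attach follows in the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Pin the host to exactly one digest/dhgroup combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null
  # Authorize the host NQN on the subsystem with this key pair (target side).
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attach from the host app; this drives a DH-HMAC-CHAP handshake on the new queue.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
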
17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.492 00:18:43.492 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.492 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.492 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.752 { 00:18:43.752 "cntlid": 1, 00:18:43.752 "qid": 0, 00:18:43.752 "state": "enabled", 00:18:43.752 "thread": "nvmf_tgt_poll_group_000", 00:18:43.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:43.752 "listen_address": { 00:18:43.752 "trtype": "TCP", 00:18:43.752 "adrfam": "IPv4", 00:18:43.752 "traddr": "10.0.0.2", 00:18:43.752 "trsvcid": "4420" 00:18:43.752 }, 00:18:43.752 "peer_address": { 00:18:43.752 "trtype": "TCP", 00:18:43.752 "adrfam": "IPv4", 00:18:43.752 "traddr": "10.0.0.1", 00:18:43.752 "trsvcid": "59304" 00:18:43.752 }, 00:18:43.752 "auth": { 00:18:43.752 "state": "completed", 00:18:43.752 "digest": "sha256", 00:18:43.752 "dhgroup": "null" 00:18:43.752 } 00:18:43.752 } 00:18:43.752 ]' 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:43.752 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.013 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.013 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.013 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.013 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:18:44.013 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:18:44.952 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.953 17:34:36 
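The same secrets are then exercised end to end with the kernel initiator: nvme-cli takes the formatted DHHC-1 strings (the file contents, not the file paths) directly on the command line, and the controller is torn down again before the next matrix entry. From the trace at auth.sh@36, with the long base64 payloads elided:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
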
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.953 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.213 00:18:45.213 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.213 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.213 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.475 { 00:18:45.475 "cntlid": 3, 00:18:45.475 "qid": 0, 00:18:45.475 "state": "enabled", 00:18:45.475 "thread": "nvmf_tgt_poll_group_000", 00:18:45.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:45.475 "listen_address": { 00:18:45.475 "trtype": "TCP", 00:18:45.475 "adrfam": "IPv4", 00:18:45.475 "traddr": "10.0.0.2", 00:18:45.475 "trsvcid": "4420" 00:18:45.475 }, 00:18:45.475 "peer_address": { 00:18:45.475 "trtype": "TCP", 00:18:45.475 "adrfam": "IPv4", 00:18:45.475 "traddr": "10.0.0.1", 00:18:45.475 "trsvcid": "59332" 00:18:45.475 }, 00:18:45.475 "auth": { 00:18:45.475 "state": "completed", 00:18:45.475 "digest": "sha256", 00:18:45.475 "dhgroup": "null" 00:18:45.475 } 00:18:45.475 } 00:18:45.475 ]' 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:45.475 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:45.737 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==:
00:18:45.737 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==:
00:18:46.309 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:46.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:46.309 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:46.309 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.309 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.309 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.309 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:46.309 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:46.309 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
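[Editor's note] The iteration starting above (connect_authenticate sha256 null 2) drives the same paired RPC calls that every cycle in this log uses, one on each side of the fabric. Pulled out of the surrounding records for reference, using exactly the names the log shows (key2/ckey2 are key names registered with the target earlier in the test), this is a sketch rather than the test's actual helper functions:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Target side (default SPDK socket): allow the host NQN on the subsystem
    # and bind its DH-HMAC-CHAP key pair.
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side (-s /var/tmp/host.sock): attach a controller with the matching
    # keys; DH-HMAC-CHAP runs during the fabrics CONNECT this triggers.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2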
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:46.569 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:46.829
00:18:46.829 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:46.829 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:46.829 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:47.091 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.091 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:47.091 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.091 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.091 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.091 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:47.091 {
00:18:47.091 "cntlid": 5,
00:18:47.091 "qid": 0,
00:18:47.091 "state": "enabled",
00:18:47.091 "thread": "nvmf_tgt_poll_group_000",
00:18:47.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:47.091 "listen_address": {
00:18:47.091 "trtype": "TCP",
00:18:47.091 "adrfam": "IPv4",
00:18:47.091 "traddr": "10.0.0.2",
00:18:47.091 "trsvcid": "4420"
00:18:47.091 },
00:18:47.091 "peer_address": {
00:18:47.091 "trtype": "TCP",
00:18:47.091 "adrfam": "IPv4",
00:18:47.091 "traddr": "10.0.0.1",
00:18:47.091 "trsvcid": "59360"
00:18:47.091 },
00:18:47.091 "auth": {
00:18:47.091 "state": "completed",
00:18:47.091 "digest": "sha256",
00:18:47.091 "dhgroup": "null"
00:18:47.091 }
00:18:47.091 }
00:18:47.091 ]'
00:18:47.091 17:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:47.091 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:47.091 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:47.091 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:47.091 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:47.091 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
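[Editor's note] After the bdev controller is torn down, the next records exercise the same key pair through nvme-cli. The --dhchap-secret/--dhchap-ctrl-secret strings are NVMe TP 8006 secret representations: "DHHC-1:" followed by a two-digit transform selector (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and a base64 payload carrying the secret plus a CRC-32. A key in that shape can be minted with nvme-cli; the exact flag spellings below depend on the nvme-cli version installed on the test node and are an assumption, not taken from this run:

    # Hypothetical example: generate a 32-byte DH-HMAC-CHAP secret with the
    # SHA-256 transform (yields a DHHC-1:01:...: string like the ones below).
    nvme gen-dhchap-key --hmac=1 --key-length=32 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be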
00:18:47.091 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:47.091 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:47.352 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6:
00:18:47.352 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6:
00:18:47.924 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:47.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:47.924 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:47.924 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.924 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.924 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.924 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:47.924 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:47.924 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@10 -- # set +x 00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.184 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.444 00:18:48.444 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.444 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.444 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.704 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.704 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.704 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.704 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.704 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.704 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.704 { 00:18:48.704 "cntlid": 7, 00:18:48.704 "qid": 0, 00:18:48.704 "state": "enabled", 00:18:48.704 "thread": "nvmf_tgt_poll_group_000", 00:18:48.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.704 "listen_address": { 00:18:48.704 "trtype": "TCP", 00:18:48.705 "adrfam": "IPv4", 00:18:48.705 "traddr": "10.0.0.2", 00:18:48.705 "trsvcid": "4420" 00:18:48.705 }, 00:18:48.705 "peer_address": { 00:18:48.705 "trtype": "TCP", 00:18:48.705 "adrfam": "IPv4", 00:18:48.705 "traddr": "10.0.0.1", 00:18:48.705 "trsvcid": "59386" 00:18:48.705 }, 00:18:48.705 "auth": { 00:18:48.705 "state": "completed", 00:18:48.705 "digest": "sha256", 00:18:48.705 "dhgroup": "null" 00:18:48.705 } 00:18:48.705 } 00:18:48.705 ]' 00:18:48.705 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.705 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.705 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.705 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:48.705 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.705 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.705 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.705 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.964 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:18:48.964 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:18:49.533 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.533 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.533 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.533 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.533 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.533 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.533 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.533 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.533 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.792 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.052 00:18:50.052 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.052 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.052 17:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.311 { 00:18:50.311 "cntlid": 9, 00:18:50.311 "qid": 0, 00:18:50.311 "state": "enabled", 00:18:50.311 "thread": "nvmf_tgt_poll_group_000", 00:18:50.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:50.311 "listen_address": { 00:18:50.311 "trtype": "TCP", 00:18:50.311 "adrfam": "IPv4", 00:18:50.311 "traddr": "10.0.0.2", 00:18:50.311 "trsvcid": "4420" 00:18:50.311 }, 00:18:50.311 "peer_address": { 00:18:50.311 "trtype": "TCP", 00:18:50.311 "adrfam": "IPv4", 00:18:50.311 "traddr": "10.0.0.1", 00:18:50.311 "trsvcid": "59406" 00:18:50.311 }, 00:18:50.311 "auth": { 00:18:50.311 "state": "completed", 00:18:50.311 "digest": "sha256", 00:18:50.311 "dhgroup": "ffdhe2048" 00:18:50.311 } 00:18:50.311 } 00:18:50.311 ]' 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.311 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.570 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:18:50.570 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:18:51.138 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.138 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.138 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.138 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.138 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.138 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.138 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.138 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.398 17:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.398 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.657 00:18:51.657 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.657 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.657 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.917 { 00:18:51.917 "cntlid": 11, 00:18:51.917 "qid": 0, 00:18:51.917 "state": "enabled", 00:18:51.917 "thread": "nvmf_tgt_poll_group_000", 00:18:51.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:51.917 "listen_address": { 00:18:51.917 "trtype": "TCP", 00:18:51.917 "adrfam": "IPv4", 00:18:51.917 "traddr": "10.0.0.2", 00:18:51.917 "trsvcid": "4420" 00:18:51.917 }, 00:18:51.917 "peer_address": { 00:18:51.917 "trtype": "TCP", 00:18:51.917 "adrfam": "IPv4", 00:18:51.917 "traddr": "10.0.0.1", 00:18:51.917 "trsvcid": "59420" 00:18:51.917 }, 00:18:51.917 "auth": { 00:18:51.917 "state": "completed", 00:18:51.917 "digest": "sha256", 00:18:51.917 "dhgroup": "ffdhe2048" 00:18:51.917 } 00:18:51.917 } 00:18:51.917 ]' 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.917 17:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.917 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.176 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:18:52.176 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:18:52.746 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.006 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.006 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.006 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.007 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.007 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.007 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.007 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:53.007 17:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.007 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.266 00:18:53.266 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.266 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.266 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.525 { 00:18:53.525 "cntlid": 13, 00:18:53.525 "qid": 0, 00:18:53.525 "state": "enabled", 00:18:53.525 "thread": "nvmf_tgt_poll_group_000", 00:18:53.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:53.525 "listen_address": { 00:18:53.525 "trtype": "TCP", 00:18:53.525 "adrfam": "IPv4", 00:18:53.525 "traddr": "10.0.0.2", 00:18:53.525 "trsvcid": "4420" 00:18:53.525 }, 00:18:53.525 "peer_address": { 00:18:53.525 "trtype": "TCP", 00:18:53.525 "adrfam": "IPv4", 00:18:53.525 "traddr": "10.0.0.1", 00:18:53.525 "trsvcid": "55100" 00:18:53.525 }, 00:18:53.525 "auth": { 00:18:53.525 "state": "completed", 00:18:53.525 "digest": 
"sha256", 00:18:53.525 "dhgroup": "ffdhe2048" 00:18:53.525 } 00:18:53.525 } 00:18:53.525 ]' 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:53.525 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.785 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.785 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.785 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.785 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:18:53.785 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.724 17:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.724 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.984 00:18:54.984 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.984 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.984 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.244 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.244 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.244 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.244 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.244 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.244 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.244 { 00:18:55.244 "cntlid": 15, 00:18:55.244 "qid": 0, 00:18:55.244 "state": "enabled", 00:18:55.244 "thread": "nvmf_tgt_poll_group_000", 00:18:55.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:55.244 "listen_address": { 00:18:55.244 "trtype": "TCP", 00:18:55.245 "adrfam": "IPv4", 00:18:55.245 "traddr": "10.0.0.2", 00:18:55.245 "trsvcid": "4420" 00:18:55.245 }, 00:18:55.245 "peer_address": { 00:18:55.245 "trtype": "TCP", 00:18:55.245 "adrfam": "IPv4", 00:18:55.245 "traddr": "10.0.0.1", 00:18:55.245 
"trsvcid": "55120" 00:18:55.245 }, 00:18:55.245 "auth": { 00:18:55.245 "state": "completed", 00:18:55.245 "digest": "sha256", 00:18:55.245 "dhgroup": "ffdhe2048" 00:18:55.245 } 00:18:55.245 } 00:18:55.245 ]' 00:18:55.245 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.245 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.245 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.245 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.245 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.245 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.245 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.245 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.505 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:18:55.505 17:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:18:56.076 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.076 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.076 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.076 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.076 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.076 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.076 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.076 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.076 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:56.337 17:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.337 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.598 00:18:56.598 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.598 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.598 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.859 { 00:18:56.859 "cntlid": 17, 00:18:56.859 "qid": 0, 00:18:56.859 "state": "enabled", 00:18:56.859 "thread": "nvmf_tgt_poll_group_000", 00:18:56.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.859 "listen_address": { 00:18:56.859 "trtype": "TCP", 00:18:56.859 "adrfam": "IPv4", 
00:18:56.859 "traddr": "10.0.0.2", 00:18:56.859 "trsvcid": "4420" 00:18:56.859 }, 00:18:56.859 "peer_address": { 00:18:56.859 "trtype": "TCP", 00:18:56.859 "adrfam": "IPv4", 00:18:56.859 "traddr": "10.0.0.1", 00:18:56.859 "trsvcid": "55154" 00:18:56.859 }, 00:18:56.859 "auth": { 00:18:56.859 "state": "completed", 00:18:56.859 "digest": "sha256", 00:18:56.859 "dhgroup": "ffdhe3072" 00:18:56.859 } 00:18:56.859 } 00:18:56.859 ]' 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.859 17:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.119 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:18:57.119 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:18:57.691 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.691 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.691 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.691 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.691 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.691 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.691 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.691 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.952 17:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.212 00:18:58.212 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.212 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.212 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.473 { 
00:18:58.473 "cntlid": 19, 00:18:58.473 "qid": 0, 00:18:58.473 "state": "enabled", 00:18:58.473 "thread": "nvmf_tgt_poll_group_000", 00:18:58.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.473 "listen_address": { 00:18:58.473 "trtype": "TCP", 00:18:58.473 "adrfam": "IPv4", 00:18:58.473 "traddr": "10.0.0.2", 00:18:58.473 "trsvcid": "4420" 00:18:58.473 }, 00:18:58.473 "peer_address": { 00:18:58.473 "trtype": "TCP", 00:18:58.473 "adrfam": "IPv4", 00:18:58.473 "traddr": "10.0.0.1", 00:18:58.473 "trsvcid": "55174" 00:18:58.473 }, 00:18:58.473 "auth": { 00:18:58.473 "state": "completed", 00:18:58.473 "digest": "sha256", 00:18:58.473 "dhgroup": "ffdhe3072" 00:18:58.473 } 00:18:58.473 } 00:18:58.473 ]' 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.473 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.735 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:18:58.735 17:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:18:59.308 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.308 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.308 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.308 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.308 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.308 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.308 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.308 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.568 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.830 00:18:59.830 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.830 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.830 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.090 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.090 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.090 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.090 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.090 17:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.090 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.090 { 00:19:00.090 "cntlid": 21, 00:19:00.090 "qid": 0, 00:19:00.090 "state": "enabled", 00:19:00.090 "thread": "nvmf_tgt_poll_group_000", 00:19:00.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:00.090 "listen_address": { 00:19:00.090 "trtype": "TCP", 00:19:00.090 "adrfam": "IPv4", 00:19:00.090 "traddr": "10.0.0.2", 00:19:00.090 "trsvcid": "4420" 00:19:00.090 }, 00:19:00.090 "peer_address": { 00:19:00.090 "trtype": "TCP", 00:19:00.090 "adrfam": "IPv4", 00:19:00.090 "traddr": "10.0.0.1", 00:19:00.090 "trsvcid": "55184" 00:19:00.090 }, 00:19:00.090 "auth": { 00:19:00.090 "state": "completed", 00:19:00.090 "digest": "sha256", 00:19:00.090 "dhgroup": "ffdhe3072" 00:19:00.090 } 00:19:00.090 } 00:19:00.090 ]' 00:19:00.090 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.090 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.090 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.090 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.090 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.090 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.090 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.090 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.351 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:00.351 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:00.922 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.922 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.922 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.922 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.922 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
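
Each pass then asserts that authentication actually completed on the new queue pair. A sketch of those checks, reusing the jq filters from the trace (variables as in the sketch above; the exit-on-mismatch handling here is illustrative, not the script's own helpers):

    # The controller must have come up under the expected name.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name') == "nvme0" ]] || exit 1

    # The qpair's auth block must report the negotiated parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]    || exit 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]] || exit 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]] || exit 1

    # Tear down before the kernel-initiator pass repeats the handshake.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
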
-- # [[ 0 == 0 ]] 00:19:00.922 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.922 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.922 17:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.183 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.444 00:19:01.444 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.444 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.444 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.705 17:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.705 { 00:19:01.705 "cntlid": 23, 00:19:01.705 "qid": 0, 00:19:01.705 "state": "enabled", 00:19:01.705 "thread": "nvmf_tgt_poll_group_000", 00:19:01.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.705 "listen_address": { 00:19:01.705 "trtype": "TCP", 00:19:01.705 "adrfam": "IPv4", 00:19:01.705 "traddr": "10.0.0.2", 00:19:01.705 "trsvcid": "4420" 00:19:01.705 }, 00:19:01.705 "peer_address": { 00:19:01.705 "trtype": "TCP", 00:19:01.705 "adrfam": "IPv4", 00:19:01.705 "traddr": "10.0.0.1", 00:19:01.705 "trsvcid": "55214" 00:19:01.705 }, 00:19:01.705 "auth": { 00:19:01.705 "state": "completed", 00:19:01.705 "digest": "sha256", 00:19:01.705 "dhgroup": "ffdhe3072" 00:19:01.705 } 00:19:01.705 } 00:19:01.705 ]' 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.705 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.706 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.968 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:01.968 17:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:02.539 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.539 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.539 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.539 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.539 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
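
Note the key3 pass just above: nvmf_subsystem_add_host and both connects carry only --dhchap-key key3, with no controller key. That comes from the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line in the trace: bash's ${var:+word} alternate-value expansion emits the flag pair only when the controller-key slot is non-empty. A standalone demo of that expansion (array contents here are illustrative):

    #!/usr/bin/env bash
    # ckeys[3] is empty, mirroring the unidirectional key3 case above.
    ckeys=("ckey0" "ckey1" "ckey2" "")
    for keyid in "${!ckeys[@]}"; do
        # With an empty slot, ${...:+...} expands to nothing at all, so
        # ckey becomes a zero-element array and the flag vanishes.
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid: ${ckey[*]:-<no controller key>}"
    done
    # Prints "--dhchap-ctrlr-key ckey0" .. ckey2, then "<no controller key>".
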
== 0 ]] 00:19:02.539 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.539 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.539 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.539 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.799 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.060 00:19:03.060 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.060 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.060 17:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.060 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.060 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.060 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.060 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.320 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.320 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.320 { 00:19:03.320 "cntlid": 25, 00:19:03.320 "qid": 0, 00:19:03.320 "state": "enabled", 00:19:03.320 "thread": "nvmf_tgt_poll_group_000", 00:19:03.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.320 "listen_address": { 00:19:03.320 "trtype": "TCP", 00:19:03.320 "adrfam": "IPv4", 00:19:03.320 "traddr": "10.0.0.2", 00:19:03.320 "trsvcid": "4420" 00:19:03.320 }, 00:19:03.320 "peer_address": { 00:19:03.320 "trtype": "TCP", 00:19:03.320 "adrfam": "IPv4", 00:19:03.320 "traddr": "10.0.0.1", 00:19:03.320 "trsvcid": "47404" 00:19:03.320 }, 00:19:03.320 "auth": { 00:19:03.320 "state": "completed", 00:19:03.320 "digest": "sha256", 00:19:03.320 "dhgroup": "ffdhe4096" 00:19:03.320 } 00:19:03.320 } 00:19:03.320 ]' 00:19:03.320 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.320 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.320 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.320 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.320 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.320 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.320 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.320 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.580 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:03.580 17:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:04.150 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.150 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
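
After the bdev-level round trip, the same combination is replayed through the kernel initiator. A sketch of that leg; the DHHC-1 strings are shortened placeholders, not the secrets from this run. In the DHHC-1:<t>:<base64>: representation the middle field denotes the hash transform applied to the secret (00 for none, 01/02/03 for SHA-256/384/512), which is why the four keys in this trace differ in that field:

    # Kernel initiator: connect with explicit DH-HMAC-CHAP secrets.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:00:<base64 host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64 ctrl key>:'

    nvme disconnect -n "$subnqn"

    # Remove the host entry so the next (dhgroup, key) pair starts clean.
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
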
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.150 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.150 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.150 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.150 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.150 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.150 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.411 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.672 00:19:04.672 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.672 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.672 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.934 { 00:19:04.934 "cntlid": 27, 00:19:04.934 "qid": 0, 00:19:04.934 "state": "enabled", 00:19:04.934 "thread": "nvmf_tgt_poll_group_000", 00:19:04.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:04.934 "listen_address": { 00:19:04.934 "trtype": "TCP", 00:19:04.934 "adrfam": "IPv4", 00:19:04.934 "traddr": "10.0.0.2", 00:19:04.934 "trsvcid": "4420" 00:19:04.934 }, 00:19:04.934 "peer_address": { 00:19:04.934 "trtype": "TCP", 00:19:04.934 "adrfam": "IPv4", 00:19:04.934 "traddr": "10.0.0.1", 00:19:04.934 "trsvcid": "47416" 00:19:04.934 }, 00:19:04.934 "auth": { 00:19:04.934 "state": "completed", 00:19:04.934 "digest": "sha256", 00:19:04.934 "dhgroup": "ffdhe4096" 00:19:04.934 } 00:19:04.934 } 00:19:04.934 ]' 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.934 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.196 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:05.196 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:05.767 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
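
A side note on reading these assertions: lines like [[ nvme0 == \n\v\m\e\0 ]] and [[ sha256 == \s\h\a\2\5\6 ]] are not garbled. Under set -x, bash re-prints a quoted right-hand side of == with every character backslash-escaped so the traced command reads back as a literal string match rather than a glob. A two-line demo:

    set -x
    [[ nvme0 == "nvme0" ]] && echo ok
    # xtrace prints: + [[ nvme0 == \n\v\m\e\0 ]]
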
nqn.2024-03.io.spdk:cnode0 00:19:05.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.767 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.767 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.767 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.767 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.767 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.767 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.767 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.028 17:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.290 00:19:06.290 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:19:06.290 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.290 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.551 { 00:19:06.551 "cntlid": 29, 00:19:06.551 "qid": 0, 00:19:06.551 "state": "enabled", 00:19:06.551 "thread": "nvmf_tgt_poll_group_000", 00:19:06.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.551 "listen_address": { 00:19:06.551 "trtype": "TCP", 00:19:06.551 "adrfam": "IPv4", 00:19:06.551 "traddr": "10.0.0.2", 00:19:06.551 "trsvcid": "4420" 00:19:06.551 }, 00:19:06.551 "peer_address": { 00:19:06.551 "trtype": "TCP", 00:19:06.551 "adrfam": "IPv4", 00:19:06.551 "traddr": "10.0.0.1", 00:19:06.551 "trsvcid": "47444" 00:19:06.551 }, 00:19:06.551 "auth": { 00:19:06.551 "state": "completed", 00:19:06.551 "digest": "sha256", 00:19:06.551 "dhgroup": "ffdhe4096" 00:19:06.551 } 00:19:06.551 } 00:19:06.551 ]' 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.551 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.812 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:06.812 17:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: 
--dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:07.382 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.383 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.383 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.383 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.383 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.383 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.383 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.383 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.644 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.904 00:19:07.904 17:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.904 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.905 17:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.166 { 00:19:08.166 "cntlid": 31, 00:19:08.166 "qid": 0, 00:19:08.166 "state": "enabled", 00:19:08.166 "thread": "nvmf_tgt_poll_group_000", 00:19:08.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:08.166 "listen_address": { 00:19:08.166 "trtype": "TCP", 00:19:08.166 "adrfam": "IPv4", 00:19:08.166 "traddr": "10.0.0.2", 00:19:08.166 "trsvcid": "4420" 00:19:08.166 }, 00:19:08.166 "peer_address": { 00:19:08.166 "trtype": "TCP", 00:19:08.166 "adrfam": "IPv4", 00:19:08.166 "traddr": "10.0.0.1", 00:19:08.166 "trsvcid": "47478" 00:19:08.166 }, 00:19:08.166 "auth": { 00:19:08.166 "state": "completed", 00:19:08.166 "digest": "sha256", 00:19:08.166 "dhgroup": "ffdhe4096" 00:19:08.166 } 00:19:08.166 } 00:19:08.166 ]' 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.166 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.426 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:08.426 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:08.999 17:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.258 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
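
The @119/@120 markers above show the dhgroup loop advancing to ffdhe6144 while the key loop restarts at key0. A sketch of the driver implied by those markers, using the trace's own helper names (hostrpc, connect_authenticate); the array contents reflect only what is visible in this slice of the log, and the full script covers more groups:

    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this excerpt
    keys=(key0 key1 key2 key3)
    for dhgroup in "${dhgroups[@]}"; do                               # @119
        for keyid in "${!keys[@]}"; do                                # @120
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"  # @121
            connect_authenticate sha256 "$dhgroup" "$keyid"           # @123
        done
    done
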
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.829 00:19:09.829 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.829 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.829 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.829 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.829 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.829 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.829 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.829 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.829 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.829 { 00:19:09.829 "cntlid": 33, 00:19:09.829 "qid": 0, 00:19:09.829 "state": "enabled", 00:19:09.829 "thread": "nvmf_tgt_poll_group_000", 00:19:09.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.829 "listen_address": { 00:19:09.829 "trtype": "TCP", 00:19:09.829 "adrfam": "IPv4", 00:19:09.829 "traddr": "10.0.0.2", 00:19:09.829 "trsvcid": "4420" 00:19:09.829 }, 00:19:09.829 "peer_address": { 00:19:09.829 "trtype": "TCP", 00:19:09.829 "adrfam": "IPv4", 00:19:09.829 "traddr": "10.0.0.1", 00:19:09.829 "trsvcid": "47496" 00:19:09.829 }, 00:19:09.829 "auth": { 00:19:09.829 "state": "completed", 00:19:09.829 "digest": "sha256", 00:19:09.829 "dhgroup": "ffdhe6144" 00:19:09.829 } 00:19:09.829 } 00:19:09.829 ]' 00:19:09.829 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.089 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.089 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.089 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.089 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.089 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.089 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.089 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.350 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret 
DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:10.350 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:10.919 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.919 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.919 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.919 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.919 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.919 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.919 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:10.919 17:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.179 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.440 00:19:11.440 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.440 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.440 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.700 { 00:19:11.700 "cntlid": 35, 00:19:11.700 "qid": 0, 00:19:11.700 "state": "enabled", 00:19:11.700 "thread": "nvmf_tgt_poll_group_000", 00:19:11.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:11.700 "listen_address": { 00:19:11.700 "trtype": "TCP", 00:19:11.700 "adrfam": "IPv4", 00:19:11.700 "traddr": "10.0.0.2", 00:19:11.700 "trsvcid": "4420" 00:19:11.700 }, 00:19:11.700 "peer_address": { 00:19:11.700 "trtype": "TCP", 00:19:11.700 "adrfam": "IPv4", 00:19:11.700 "traddr": "10.0.0.1", 00:19:11.700 "trsvcid": "47542" 00:19:11.700 }, 00:19:11.700 "auth": { 00:19:11.700 "state": "completed", 00:19:11.700 "digest": "sha256", 00:19:11.700 "dhgroup": "ffdhe6144" 00:19:11.700 } 00:19:11.700 } 00:19:11.700 ]' 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.700 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.959 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:11.960 17:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:12.529 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.529 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.529 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.529 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.529 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.529 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.529 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:12.529 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.788 17:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.047 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.306 { 00:19:13.306 "cntlid": 37, 00:19:13.306 "qid": 0, 00:19:13.306 "state": "enabled", 00:19:13.306 "thread": "nvmf_tgt_poll_group_000", 00:19:13.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.306 "listen_address": { 00:19:13.306 "trtype": "TCP", 00:19:13.306 "adrfam": "IPv4", 00:19:13.306 "traddr": "10.0.0.2", 00:19:13.306 "trsvcid": "4420" 00:19:13.306 }, 00:19:13.306 "peer_address": { 00:19:13.306 "trtype": "TCP", 00:19:13.306 "adrfam": "IPv4", 00:19:13.306 "traddr": "10.0.0.1", 00:19:13.306 "trsvcid": "45798" 00:19:13.306 }, 00:19:13.306 "auth": { 00:19:13.306 "state": "completed", 00:19:13.306 "digest": "sha256", 00:19:13.306 "dhgroup": "ffdhe6144" 00:19:13.306 } 00:19:13.306 } 00:19:13.306 ]' 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.306 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.565 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.565 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.565 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.566 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:13.566 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.566 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:13.566 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.505 17:35:06 
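The target-side provisioning exercised just above is a pair of rpc.py calls. A minimal sketch, reusing the subsystem and host NQNs from this run (the target's default /var/tmp/spdk.sock RPC socket is an assumption; only the host app's -s /var/tmp/host.sock appears explicitly in this trace):

    # Bidirectional DH-HMAC-CHAP: host key plus controller (ctrlr) key
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Unidirectional (the key3 case above): no --dhchap-ctrlr-key, so only the
    # host proves its identity; ckey=(${ckeys[$3]:+...}) expands to nothing
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key3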
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.505 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.765 00:19:15.028 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.028 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.028 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.028 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.028 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.028 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.028 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.028 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.028 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.028 { 00:19:15.028 "cntlid": 39, 00:19:15.028 "qid": 0, 00:19:15.028 "state": "enabled", 00:19:15.028 "thread": "nvmf_tgt_poll_group_000", 00:19:15.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:15.028 "listen_address": { 00:19:15.028 "trtype": "TCP", 00:19:15.028 "adrfam": "IPv4", 00:19:15.028 "traddr": "10.0.0.2", 00:19:15.028 "trsvcid": "4420" 00:19:15.028 }, 00:19:15.028 "peer_address": { 00:19:15.028 "trtype": "TCP", 00:19:15.028 "adrfam": "IPv4", 00:19:15.028 "traddr": "10.0.0.1", 00:19:15.028 "trsvcid": "45828" 00:19:15.028 }, 00:19:15.028 "auth": { 00:19:15.028 "state": "completed", 00:19:15.028 "digest": "sha256", 00:19:15.028 "dhgroup": "ffdhe6144" 00:19:15.028 } 00:19:15.028 } 00:19:15.028 ]' 00:19:15.028 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.028 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.028 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.289 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.289 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.289 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:15.289 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.289 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.289 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:15.289 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:16.230 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.230 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.230 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.230 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
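The auth.sh@118-@123 markers above reveal the driver: three nested loops over digest, DH group, and key id, with the host restricted to a single digest/dhgroup pair per pass. A hedged reconstruction (the array contents are inferred from the combinations this log visits, not read from the source):

    # Inferred driver loop: every digest x dhgroup x key combination
    digests=("sha256" "sha384" "sha512")
    dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # restrict the host to one digest/dhgroup, then run one pass
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done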
00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.230 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.802 00:19:16.802 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.802 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.802 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.802 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.802 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.064 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.064 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.064 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.064 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.064 { 00:19:17.064 "cntlid": 41, 00:19:17.064 "qid": 0, 00:19:17.064 "state": "enabled", 00:19:17.064 "thread": "nvmf_tgt_poll_group_000", 00:19:17.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:17.064 "listen_address": { 00:19:17.064 "trtype": "TCP", 00:19:17.064 "adrfam": "IPv4", 00:19:17.064 "traddr": "10.0.0.2", 00:19:17.064 "trsvcid": "4420" 00:19:17.064 }, 00:19:17.064 "peer_address": { 00:19:17.064 "trtype": "TCP", 00:19:17.064 "adrfam": "IPv4", 00:19:17.064 "traddr": "10.0.0.1", 00:19:17.064 "trsvcid": "45852" 00:19:17.064 }, 00:19:17.064 "auth": { 00:19:17.064 "state": "completed", 00:19:17.064 "digest": "sha256", 00:19:17.064 "dhgroup": "ffdhe8192" 00:19:17.064 } 00:19:17.064 } 00:19:17.064 ]' 00:19:17.064 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.064 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.064 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.064 17:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.064 17:35:08 
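Two RPC sockets are in play throughout this trace: rpc_cmd (from autotest_common.sh) talks to the nvmf target app, while every hostrpc call expands, per auth.sh@31, into rpc.py against a second SPDK app acting as the initiator. Functionally (the wrapper body is a sketch; only its expanded command line is shown verbatim in the log):

    # host-side (initiator) RPC wrapper, as expanded at auth.sh@31
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }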
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.064 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.064 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.064 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.326 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:17.326 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:17.897 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.898 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.898 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.898 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.898 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.898 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.898 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.898 17:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.277 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.586 00:19:18.586 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.586 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.586 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.877 { 00:19:18.877 "cntlid": 43, 00:19:18.877 "qid": 0, 00:19:18.877 "state": "enabled", 00:19:18.877 "thread": "nvmf_tgt_poll_group_000", 00:19:18.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:18.877 "listen_address": { 00:19:18.877 "trtype": "TCP", 00:19:18.877 "adrfam": "IPv4", 00:19:18.877 "traddr": "10.0.0.2", 00:19:18.877 "trsvcid": "4420" 00:19:18.877 }, 00:19:18.877 "peer_address": { 00:19:18.877 "trtype": "TCP", 00:19:18.877 "adrfam": "IPv4", 00:19:18.877 "traddr": "10.0.0.1", 00:19:18.877 "trsvcid": "45882" 00:19:18.877 }, 00:19:18.877 "auth": { 00:19:18.877 "state": "completed", 00:19:18.877 "digest": "sha256", 00:19:18.877 "dhgroup": "ffdhe8192" 00:19:18.877 } 00:19:18.877 } 00:19:18.877 ]' 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.877 17:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.149 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:19.149 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:19.720 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.720 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.720 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.720 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.720 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.720 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.720 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:19.720 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:19.981 17:35:11 
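The DHHC-1:xx:...: strings handed to nvme connect are the spec-defined secret representation: the xx field names the transformation applied to the key material (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the middle field is the base64 encoding of the key concatenated with its CRC-32. nvme-cli can mint such secrets; a sketch, assuming a recent nvme-cli that ships gen-dhchap-key (check your version):

    # 32-byte random secret with the SHA-256 (01) transformation, bound to
    # the host NQN used in this run
    nvme gen-dhchap-key --key-length 32 --hmac 1 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # prints a secret of the form DHHC-1:01:<base64>: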
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.981 17:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.551 00:19:20.551 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.551 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.551 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.551 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.551 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.551 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.551 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.551 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.551 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.551 { 00:19:20.551 "cntlid": 45, 00:19:20.551 "qid": 0, 00:19:20.551 "state": "enabled", 00:19:20.551 "thread": "nvmf_tgt_poll_group_000", 00:19:20.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:20.551 "listen_address": { 00:19:20.551 "trtype": "TCP", 00:19:20.551 "adrfam": "IPv4", 00:19:20.551 "traddr": "10.0.0.2", 00:19:20.551 "trsvcid": "4420" 00:19:20.551 }, 00:19:20.551 "peer_address": { 00:19:20.551 "trtype": "TCP", 00:19:20.551 "adrfam": "IPv4", 00:19:20.551 "traddr": "10.0.0.1", 00:19:20.551 "trsvcid": "45912" 00:19:20.551 }, 00:19:20.551 "auth": { 00:19:20.551 "state": "completed", 00:19:20.551 "digest": "sha256", 00:19:20.551 "dhgroup": "ffdhe8192" 00:19:20.551 } 00:19:20.551 } 00:19:20.551 ]' 00:19:20.551 
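Stripped of trace noise, each bdev_connect above is a single host-side RPC whose options map directly onto the fabric connect (command taken verbatim from this run; only the comments are added):

    # -t/-f/-a/-s: TCP transport, IPv4, target listener at 10.0.0.2:4420
    # -q: host NQN for the connection; -n: subsystem NQN; -b: controller name
    # --dhchap-key/--dhchap-ctrlr-key: keyring entries registered on the host app
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2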
17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.811 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.811 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.811 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:20.811 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.811 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.811 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.811 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.071 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:21.071 17:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:21.641 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.641 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.641 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.641 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.641 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.641 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.641 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.641 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.902 17:35:13 
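After each SPDK-initiator pass, the same credentials are pushed through the kernel initiator: nvme connect consumes the DHHC-1 secrets directly, and the paired disconnect tears the association down. The essential flags, with the secrets elided and $hostnqn/$hostid standing in for the uuid:00d0226a-... values above (-i 1 caps it at one I/O queue; -l 0 disables controller-loss retries so an authentication failure surfaces immediately):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:02:...:' --dhchap-ctrl-secret 'DHHC-1:01:...:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0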
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.902 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.474 00:19:22.474 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.474 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.474 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.474 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.474 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.474 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.474 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.474 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.474 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.474 { 00:19:22.474 "cntlid": 47, 00:19:22.474 "qid": 0, 00:19:22.474 "state": "enabled", 00:19:22.474 "thread": "nvmf_tgt_poll_group_000", 00:19:22.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:22.474 "listen_address": { 00:19:22.474 "trtype": "TCP", 00:19:22.474 "adrfam": "IPv4", 00:19:22.474 "traddr": "10.0.0.2", 00:19:22.474 "trsvcid": "4420" 00:19:22.474 }, 00:19:22.474 "peer_address": { 00:19:22.474 "trtype": "TCP", 00:19:22.474 "adrfam": "IPv4", 00:19:22.474 "traddr": "10.0.0.1", 00:19:22.474 "trsvcid": "49086" 00:19:22.474 }, 00:19:22.474 "auth": { 00:19:22.475 "state": "completed", 00:19:22.475 
"digest": "sha256", 00:19:22.475 "dhgroup": "ffdhe8192" 00:19:22.475 } 00:19:22.475 } 00:19:22.475 ]' 00:19:22.475 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.475 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.475 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.735 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.735 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.735 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.735 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.735 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.735 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:22.735 17:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:23.675 17:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.675 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.935 00:19:23.935 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.935 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.935 17:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.197 { 00:19:24.197 "cntlid": 49, 00:19:24.197 "qid": 0, 00:19:24.197 "state": "enabled", 00:19:24.197 "thread": "nvmf_tgt_poll_group_000", 00:19:24.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:24.197 "listen_address": { 00:19:24.197 "trtype": "TCP", 00:19:24.197 "adrfam": "IPv4", 
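Pieced together from the auth.sh@65-@78 markers that recur in every pass, connect_authenticate has roughly this shape (a reconstruction from the trace, not the verbatim source; $subnqn and $hostnqn are assumed variable names for the two NQNs spelled out above):

    connect_authenticate() {
        local digest dhgroup key ckey qpairs
        digest=$1 dhgroup=$2 key=key$3
        ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})  # empty if no ctrlr key
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "$key" "${ckey[@]}"
        bdev_connect -b nvme0 --dhchap-key "$key" "${ckey[@]}"
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
        # the qpair must report the negotiated digest/dhgroup and a completed state
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]
        hostrpc bdev_nvme_detach_controller nvme0
    }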
00:19:24.197 "traddr": "10.0.0.2", 00:19:24.197 "trsvcid": "4420" 00:19:24.197 }, 00:19:24.197 "peer_address": { 00:19:24.197 "trtype": "TCP", 00:19:24.197 "adrfam": "IPv4", 00:19:24.197 "traddr": "10.0.0.1", 00:19:24.197 "trsvcid": "49118" 00:19:24.197 }, 00:19:24.197 "auth": { 00:19:24.197 "state": "completed", 00:19:24.197 "digest": "sha384", 00:19:24.197 "dhgroup": "null" 00:19:24.197 } 00:19:24.197 } 00:19:24.197 ]' 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.197 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.458 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:24.458 17:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:25.027 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.027 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.027 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.027 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.027 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.027 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.027 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:25.027 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:25.287 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:25.287 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.287 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:25.287 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:25.287 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:25.287 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.287 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.288 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.288 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.288 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.288 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.288 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.288 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.548 00:19:25.548 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.548 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.548 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.808 { 00:19:25.808 "cntlid": 51, 00:19:25.808 "qid": 0, 00:19:25.808 "state": "enabled", 
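The short names key0..key3 and ckey0..ckey3 refer to keyring entries registered on both apps earlier in the test, outside this excerpt. With SPDK's file-based keyring, that registration would look like the following (hypothetical file path and key name; the actual setup code is not visible in this log):

    # write the DHHC-1 secret to a file, then register it under a short name
    # on the target (default socket) and on the host app
    echo -n 'DHHC-1:01:...:' > /tmp/key1
    scripts/rpc.py keyring_file_add_key key1 /tmp/key1
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/key1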
00:19:25.808 "thread": "nvmf_tgt_poll_group_000", 00:19:25.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:25.808 "listen_address": { 00:19:25.808 "trtype": "TCP", 00:19:25.808 "adrfam": "IPv4", 00:19:25.808 "traddr": "10.0.0.2", 00:19:25.808 "trsvcid": "4420" 00:19:25.808 }, 00:19:25.808 "peer_address": { 00:19:25.808 "trtype": "TCP", 00:19:25.808 "adrfam": "IPv4", 00:19:25.808 "traddr": "10.0.0.1", 00:19:25.808 "trsvcid": "49136" 00:19:25.808 }, 00:19:25.808 "auth": { 00:19:25.808 "state": "completed", 00:19:25.808 "digest": "sha384", 00:19:25.808 "dhgroup": "null" 00:19:25.808 } 00:19:25.808 } 00:19:25.808 ]' 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.808 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.068 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:26.068 17:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:26.638 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.639 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.639 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.639 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.639 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.639 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.639 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:19:26.639 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:26.899 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:27.160
00:19:27.160 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:27.160 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:27.160 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:27.421 {
00:19:27.421 "cntlid": 53,
00:19:27.421 "qid": 0,
00:19:27.421 "state": "enabled",
00:19:27.421 "thread": "nvmf_tgt_poll_group_000",
00:19:27.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:27.421 "listen_address": {
00:19:27.421 "trtype": "TCP",
00:19:27.421 "adrfam": "IPv4",
00:19:27.421 "traddr": "10.0.0.2",
00:19:27.421 "trsvcid": "4420"
00:19:27.421 },
00:19:27.421 "peer_address": {
00:19:27.421 "trtype": "TCP",
00:19:27.421 "adrfam": "IPv4",
00:19:27.421 "traddr": "10.0.0.1",
00:19:27.421 "trsvcid": "49154"
00:19:27.421 },
00:19:27.421 "auth": {
00:19:27.421 "state": "completed",
00:19:27.421 "digest": "sha384",
00:19:27.421 "dhgroup": "null"
00:19:27.421 }
00:19:27.421 }
00:19:27.421 ]'
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:27.421 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:27.682 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6:
00:19:27.682 17:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6:
00:19:28.254 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:28.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:28.254 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:28.254 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:28.254 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.254 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:28.254 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:28.254 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:28.254 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:28.515 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:28.795
00:19:28.795 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:28.795 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:28.795 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:29.055 {
00:19:29.055 "cntlid": 55,
00:19:29.055 "qid": 0,
00:19:29.055 "state": "enabled",
00:19:29.055 "thread": "nvmf_tgt_poll_group_000",
00:19:29.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:29.055 "listen_address": {
00:19:29.055 "trtype": "TCP",
00:19:29.055 "adrfam": "IPv4",
00:19:29.055 "traddr": "10.0.0.2",
00:19:29.055 "trsvcid": "4420"
00:19:29.055 },
00:19:29.055 "peer_address": {
00:19:29.055 "trtype": "TCP",
00:19:29.055 "adrfam": "IPv4",
00:19:29.055 "traddr": "10.0.0.1",
00:19:29.055 "trsvcid": "49168"
00:19:29.055 },
00:19:29.055 "auth": {
00:19:29.055 "state": "completed",
00:19:29.055 "digest": "sha384",
00:19:29.055 "dhgroup": "null"
00:19:29.055 }
00:19:29.055 }
00:19:29.055 ]'
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:29.055 17:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:29.055 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:29.055 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:29.055 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:29.315 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=:
00:19:29.315 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=:
00:19:29.886 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:29.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:29.886 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:29.886 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.886 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:29.886 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.886 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:29.886 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:29.886 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:29.886 17:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:30.146 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:30.407
00:19:30.407 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:30.407 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:30.407 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:30.667 {
00:19:30.667 "cntlid": 57,
00:19:30.667 "qid": 0,
00:19:30.667 "state": "enabled",
00:19:30.667 "thread": "nvmf_tgt_poll_group_000",
00:19:30.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:30.667 "listen_address": {
00:19:30.667 "trtype": "TCP",
00:19:30.667 "adrfam": "IPv4",
00:19:30.667 "traddr": "10.0.0.2",
00:19:30.667 "trsvcid": "4420"
00:19:30.667 },
00:19:30.667 "peer_address": {
00:19:30.667 "trtype": "TCP",
00:19:30.667 "adrfam": "IPv4",
00:19:30.667 "traddr": "10.0.0.1",
00:19:30.667 "trsvcid": "49202"
00:19:30.667 },
00:19:30.667 "auth": {
00:19:30.667 "state": "completed",
00:19:30.667 "digest": "sha384",
00:19:30.667 "dhgroup": "ffdhe2048"
00:19:30.667 }
00:19:30.667 }
00:19:30.667 ]'
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:30.667 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:30.927 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=:
00:19:30.927 17:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=:
00:19:31.498 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:31.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:31.498 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:31.498 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.498 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.498 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.498 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:31.498 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:31.498 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:31.757 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:19:31.757 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:31.757 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:31.757 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:31.758 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:31.758 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:31.758 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:31.758 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.758 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.758 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.758 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:31.758 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:31.758 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:32.017
00:19:32.017 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:32.017 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:32.017 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:32.277 {
00:19:32.277 "cntlid": 59,
00:19:32.277 "qid": 0,
00:19:32.277 "state": "enabled",
00:19:32.277 "thread": "nvmf_tgt_poll_group_000",
00:19:32.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:32.277 "listen_address": {
00:19:32.277 "trtype": "TCP",
00:19:32.277 "adrfam": "IPv4",
00:19:32.277 "traddr": "10.0.0.2",
00:19:32.277 "trsvcid": "4420"
00:19:32.277 },
00:19:32.277 "peer_address": {
00:19:32.277 "trtype": "TCP",
00:19:32.277 "adrfam": "IPv4",
00:19:32.277 "traddr": "10.0.0.1",
00:19:32.277 "trsvcid": "46980"
00:19:32.277 },
00:19:32.277 "auth": {
00:19:32.277 "state": "completed",
00:19:32.277 "digest": "sha384",
00:19:32.277 "dhgroup": "ffdhe2048"
00:19:32.277 }
00:19:32.277 }
00:19:32.277 ]'
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:32.277 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:32.537 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==:
00:19:32.537 17:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==:
00:19:33.121 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:33.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:33.121 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:33.121 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.121 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.121 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.121 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:33.121 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:33.121 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:33.386 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:33.701
00:19:33.701 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:33.701 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:33.701 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:33.701 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:33.701 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:33.701 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.701 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.960 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.960 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:33.960 {
00:19:33.960 "cntlid": 61,
00:19:33.960 "qid": 0,
00:19:33.960 "state": "enabled",
00:19:33.960 "thread": "nvmf_tgt_poll_group_000",
00:19:33.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:33.960 "listen_address": {
00:19:33.960 "trtype": "TCP",
00:19:33.960 "adrfam": "IPv4",
00:19:33.960 "traddr": "10.0.0.2",
00:19:33.960 "trsvcid": "4420"
00:19:33.960 },
00:19:33.960 "peer_address": {
00:19:33.960 "trtype": "TCP",
00:19:33.960 "adrfam": "IPv4",
00:19:33.960 "traddr": "10.0.0.1",
00:19:33.960 "trsvcid": "46994"
00:19:33.960 },
00:19:33.960 "auth": {
00:19:33.960 "state": "completed",
00:19:33.960 "digest": "sha384",
00:19:33.960 "dhgroup": "ffdhe2048"
00:19:33.960 }
00:19:33.960 }
00:19:33.960 ]'
00:19:33.960 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:33.960 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:33.960 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:33.960 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:33.960 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:33.960 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:33.960 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:33.960 17:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:34.219 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6:
00:19:34.219 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6:
00:19:34.789 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:34.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:34.789 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:34.789 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:34.789 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:34.789 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:34.789 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:34.789 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:34.789 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:35.049 17:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:35.309
00:19:35.309 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:35.309 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:35.309 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:35.570 {
00:19:35.570 "cntlid": 63,
00:19:35.570 "qid": 0,
00:19:35.570 "state": "enabled",
00:19:35.570 "thread": "nvmf_tgt_poll_group_000",
00:19:35.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:35.570 "listen_address": {
00:19:35.570 "trtype": "TCP",
00:19:35.570 "adrfam": "IPv4",
00:19:35.570 "traddr": "10.0.0.2",
00:19:35.570 "trsvcid": "4420"
00:19:35.570 },
00:19:35.570 "peer_address": {
00:19:35.570 "trtype": "TCP",
00:19:35.570 "adrfam": "IPv4",
00:19:35.570 "traddr": "10.0.0.1",
00:19:35.570 "trsvcid": "47026"
00:19:35.570 },
00:19:35.570 "auth": {
00:19:35.570 "state": "completed",
00:19:35.570 "digest": "sha384",
00:19:35.570 "dhgroup": "ffdhe2048"
00:19:35.570 }
00:19:35.570 }
00:19:35.570 ]'
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:35.570 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:35.831 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=:
00:19:35.831 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=:
00:19:36.402 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:36.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:36.402 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:36.403 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:36.403 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.403 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:36.403 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:36.403 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:36.403 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:36.403 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:36.664 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:36.664
00:19:36.925 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:36.925 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:36.925 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:36.925 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.925 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:36.925 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:36.925 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.925 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:36.925 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:36.925 {
00:19:36.925 "cntlid": 65,
00:19:36.925 "qid": 0,
00:19:36.925 "state": "enabled",
00:19:36.925 "thread": "nvmf_tgt_poll_group_000",
00:19:36.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:36.925 "listen_address": {
00:19:36.925 "trtype": "TCP",
00:19:36.925 "adrfam": "IPv4",
00:19:36.925 "traddr": "10.0.0.2",
00:19:36.925 "trsvcid": "4420"
00:19:36.925 },
00:19:36.925 "peer_address": {
00:19:36.925 "trtype": "TCP",
00:19:36.925 "adrfam": "IPv4",
00:19:36.925 "traddr": "10.0.0.1",
00:19:36.925 "trsvcid": "47062"
00:19:36.925 },
00:19:36.925 "auth": {
00:19:36.925 "state": "completed",
00:19:36.925 "digest": "sha384",
00:19:36.925 "dhgroup": "ffdhe3072"
00:19:36.925 }
00:19:36.925 }
00:19:36.925 ]'
00:19:37.185 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:37.185 17:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:37.185 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:37.185 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:37.185 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:37.185 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:37.185 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:37.185 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:37.445 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=:
00:19:37.445 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=:
00:19:38.016 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:38.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:38.016 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:38.016 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:38.016 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.016 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:38.016 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:38.016 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:38.016 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:38.016 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:38.277
00:19:38.277 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:38.277 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:38.277 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:38.538 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:38.538 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:38.538 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:38.538 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.538 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:38.538 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:38.538 {
00:19:38.538 "cntlid": 67,
00:19:38.538 "qid": 0,
00:19:38.538 "state": "enabled",
00:19:38.538 "thread": "nvmf_tgt_poll_group_000",
00:19:38.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:38.538 "listen_address": {
00:19:38.538 "trtype": "TCP",
00:19:38.538 "adrfam": "IPv4",
00:19:38.538 "traddr": "10.0.0.2",
00:19:38.538 "trsvcid": "4420"
00:19:38.538 },
00:19:38.538 "peer_address": {
00:19:38.538 "trtype": "TCP",
00:19:38.538 "adrfam": "IPv4",
00:19:38.538 "traddr": "10.0.0.1",
00:19:38.538 "trsvcid": "47088"
00:19:38.538 },
00:19:38.538 "auth": {
00:19:38.538 "state": "completed",
00:19:38.538 "digest": "sha384",
00:19:38.538 "dhgroup": "ffdhe3072"
00:19:38.538 }
00:19:38.538 }
00:19:38.538 ]'
00:19:38.538 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:38.538 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:38.798 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:38.798 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:38.798 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:38.798 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:38.798 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:38.798 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:38.798 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==:
00:19:38.799 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==:
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:39.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:39.739 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:40.000
00:19:40.000 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:40.000 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:40.000 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:40.259 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:40.259 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:40.259 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.259 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:40.259 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.259 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:40.259 {
00:19:40.259 "cntlid": 69,
00:19:40.259 "qid": 0,
00:19:40.259 "state": "enabled",
00:19:40.259 "thread": "nvmf_tgt_poll_group_000",
00:19:40.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:40.259 "listen_address": {
00:19:40.259 "trtype": "TCP",
00:19:40.259 "adrfam": "IPv4",
00:19:40.259 "traddr": "10.0.0.2",
00:19:40.259 "trsvcid": "4420"
00:19:40.259 },
00:19:40.259 "peer_address": {
00:19:40.259 "trtype": "TCP",
00:19:40.259 "adrfam": "IPv4",
00:19:40.259 "traddr": "10.0.0.1",
00:19:40.259 "trsvcid": "47118"
00:19:40.259 },
00:19:40.259 "auth": {
00:19:40.259 "state": "completed",
00:19:40.259 "digest": "sha384",
00:19:40.259 "dhgroup": "ffdhe3072"
00:19:40.259 }
00:19:40.259 }
00:19:40.259 ]'
00:19:40.259 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:40.259 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:40.260 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:40.260 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:40.260 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:40.260 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:40.260 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:40.260 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:40.520 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6:
00:19:40.520 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6:
00:19:41.088 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:41.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:41.088 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:41.088 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.088 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.088 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.088 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:41.088 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:41.088 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.348 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.609 00:19:41.609 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.609 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.609 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.868 { 00:19:41.868 "cntlid": 71, 00:19:41.868 "qid": 0, 00:19:41.868 "state": "enabled", 00:19:41.868 "thread": "nvmf_tgt_poll_group_000", 00:19:41.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:41.868 "listen_address": { 00:19:41.868 "trtype": "TCP", 00:19:41.868 "adrfam": "IPv4", 00:19:41.868 "traddr": "10.0.0.2", 00:19:41.868 "trsvcid": "4420" 00:19:41.868 }, 00:19:41.868 "peer_address": { 00:19:41.868 "trtype": "TCP", 00:19:41.868 "adrfam": "IPv4", 00:19:41.868 "traddr": "10.0.0.1", 00:19:41.868 "trsvcid": "47150" 00:19:41.868 }, 00:19:41.868 "auth": { 00:19:41.868 "state": "completed", 00:19:41.868 "digest": "sha384", 00:19:41.868 "dhgroup": "ffdhe3072" 00:19:41.868 } 00:19:41.868 } 00:19:41.868 ]' 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.868 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.128 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:42.128 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:42.699 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.699 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.699 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.699 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.699 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.699 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.699 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.699 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:42.699 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
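The trace here is cycling through one connect_authenticate round per key: the target is first told which host NQN may authenticate and which DH-HMAC-CHAP key(s) to expect (nvmf_subsystem_add_host, issued via rpc_cmd against the target's default RPC socket), then the SPDK host side attaches a controller with the matching key material through /var/tmp/host.sock. A minimal sketch of those two steps, assuming the NQNs, address, and flags of this run; key0/ckey0 name key objects registered earlier in the log, and the rpc path is the workspace copy used throughout:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Target side: allow the host and bind its DH-HMAC-CHAP key pair.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side (SPDK bdev initiator, driven over /var/tmp/host.sock):
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
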
00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.959 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.241 00:19:43.241 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.241 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.241 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.502 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.502 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.502 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.502 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.502 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.502 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.502 { 00:19:43.502 "cntlid": 73, 00:19:43.502 "qid": 0, 00:19:43.502 "state": "enabled", 00:19:43.502 "thread": "nvmf_tgt_poll_group_000", 00:19:43.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:43.502 "listen_address": { 00:19:43.502 "trtype": "TCP", 00:19:43.502 "adrfam": "IPv4", 00:19:43.502 "traddr": "10.0.0.2", 00:19:43.502 "trsvcid": "4420" 00:19:43.502 }, 00:19:43.502 "peer_address": { 00:19:43.502 "trtype": "TCP", 00:19:43.502 "adrfam": "IPv4", 00:19:43.502 "traddr": "10.0.0.1", 00:19:43.502 "trsvcid": "34454" 00:19:43.502 }, 00:19:43.502 "auth": { 00:19:43.502 "state": "completed", 00:19:43.502 "digest": "sha384", 00:19:43.502 "dhgroup": "ffdhe4096" 00:19:43.502 } 00:19:43.502 } 00:19:43.502 ]' 00:19:43.503 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.503 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.503 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.503 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.503 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.503 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.503 
17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.503 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.762 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:43.762 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:44.333 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.333 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.333 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.333 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.333 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.333 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.333 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:44.333 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.593 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.853 00:19:44.853 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.853 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.853 17:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.113 { 00:19:45.113 "cntlid": 75, 00:19:45.113 "qid": 0, 00:19:45.113 "state": "enabled", 00:19:45.113 "thread": "nvmf_tgt_poll_group_000", 00:19:45.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:45.113 "listen_address": { 00:19:45.113 "trtype": "TCP", 00:19:45.113 "adrfam": "IPv4", 00:19:45.113 "traddr": "10.0.0.2", 00:19:45.113 "trsvcid": "4420" 00:19:45.113 }, 00:19:45.113 "peer_address": { 00:19:45.113 "trtype": "TCP", 00:19:45.113 "adrfam": "IPv4", 00:19:45.113 "traddr": "10.0.0.1", 00:19:45.113 "trsvcid": "34484" 00:19:45.113 }, 00:19:45.113 "auth": { 00:19:45.113 "state": "completed", 00:19:45.113 "digest": "sha384", 00:19:45.113 "dhgroup": "ffdhe4096" 00:19:45.113 } 00:19:45.113 } 00:19:45.113 ]' 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.113 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.373 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:45.373 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:45.943 17:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.203 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.463 00:19:46.463 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.463 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.463 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.723 { 00:19:46.723 "cntlid": 77, 00:19:46.723 "qid": 0, 00:19:46.723 "state": "enabled", 00:19:46.723 "thread": "nvmf_tgt_poll_group_000", 00:19:46.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:46.723 "listen_address": { 00:19:46.723 "trtype": "TCP", 00:19:46.723 "adrfam": "IPv4", 00:19:46.723 "traddr": "10.0.0.2", 00:19:46.723 "trsvcid": "4420" 00:19:46.723 }, 00:19:46.723 "peer_address": { 00:19:46.723 "trtype": "TCP", 00:19:46.723 "adrfam": "IPv4", 00:19:46.723 "traddr": "10.0.0.1", 00:19:46.723 "trsvcid": "34516" 00:19:46.723 }, 00:19:46.723 "auth": { 00:19:46.723 "state": "completed", 00:19:46.723 "digest": "sha384", 00:19:46.723 "dhgroup": "ffdhe4096" 00:19:46.723 } 00:19:46.723 } 00:19:46.723 ]' 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.723 17:35:38 
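After each attach the script proves that authentication actually completed rather than only that the connect returned: it queries nvmf_subsystem_get_qpairs on the target and compares the qpair's auth block field by field with jq, exactly the [[ ... ]] checks visible in the trace. A sketch of that verification for the ffdhe4096 round shown here, reusing rpc and subnqn from the sketch above:

  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]

  # Drop the bdev controller again before the kernel-initiator pass.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
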
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.723 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.983 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:46.983 17:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:47.555 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.815 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.075 00:19:48.075 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.075 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.075 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.335 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.335 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.335 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.335 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.335 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.335 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.335 { 00:19:48.335 "cntlid": 79, 00:19:48.335 "qid": 0, 00:19:48.335 "state": "enabled", 00:19:48.335 "thread": "nvmf_tgt_poll_group_000", 00:19:48.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:48.335 "listen_address": { 00:19:48.335 "trtype": "TCP", 00:19:48.335 "adrfam": "IPv4", 00:19:48.335 "traddr": "10.0.0.2", 00:19:48.335 "trsvcid": "4420" 00:19:48.335 }, 00:19:48.335 "peer_address": { 00:19:48.335 "trtype": "TCP", 00:19:48.335 "adrfam": "IPv4", 00:19:48.335 "traddr": "10.0.0.1", 00:19:48.335 "trsvcid": "34554" 00:19:48.335 }, 00:19:48.335 "auth": { 00:19:48.335 "state": "completed", 00:19:48.335 "digest": "sha384", 00:19:48.335 "dhgroup": "ffdhe4096" 00:19:48.335 } 00:19:48.335 } 00:19:48.335 ]' 00:19:48.335 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.335 17:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.335 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.335 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.335 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.594 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.595 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.595 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.595 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:48.595 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:49.555 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.555 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.555 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.555 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.555 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.555 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.555 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:49.555 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:49.555 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:49.556 17:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.556 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.814 00:19:49.814 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.814 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.814 17:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.072 { 00:19:50.072 "cntlid": 81, 00:19:50.072 "qid": 0, 00:19:50.072 "state": "enabled", 00:19:50.072 "thread": "nvmf_tgt_poll_group_000", 00:19:50.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:50.072 "listen_address": { 00:19:50.072 "trtype": "TCP", 00:19:50.072 "adrfam": "IPv4", 00:19:50.072 "traddr": "10.0.0.2", 00:19:50.072 "trsvcid": "4420" 00:19:50.072 }, 00:19:50.072 "peer_address": { 00:19:50.072 "trtype": "TCP", 00:19:50.072 "adrfam": "IPv4", 00:19:50.072 "traddr": "10.0.0.1", 00:19:50.072 "trsvcid": "34580" 00:19:50.072 }, 00:19:50.072 "auth": { 00:19:50.072 "state": "completed", 00:19:50.072 "digest": 
"sha384", 00:19:50.072 "dhgroup": "ffdhe6144" 00:19:50.072 } 00:19:50.072 } 00:19:50.072 ]' 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.072 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.330 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.330 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.330 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.330 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:50.330 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:51.264 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.264 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.264 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.264 17:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.264 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.265 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.578 00:19:51.578 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.578 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.578 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.836 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.836 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.836 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.836 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.836 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.836 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.836 { 00:19:51.836 "cntlid": 83, 00:19:51.836 "qid": 0, 00:19:51.836 "state": "enabled", 00:19:51.836 "thread": "nvmf_tgt_poll_group_000", 00:19:51.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:51.836 "listen_address": { 00:19:51.836 "trtype": "TCP", 00:19:51.836 "adrfam": "IPv4", 00:19:51.836 "traddr": "10.0.0.2", 00:19:51.836 
"trsvcid": "4420" 00:19:51.836 }, 00:19:51.836 "peer_address": { 00:19:51.836 "trtype": "TCP", 00:19:51.836 "adrfam": "IPv4", 00:19:51.836 "traddr": "10.0.0.1", 00:19:51.836 "trsvcid": "34600" 00:19:51.836 }, 00:19:51.836 "auth": { 00:19:51.836 "state": "completed", 00:19:51.836 "digest": "sha384", 00:19:51.836 "dhgroup": "ffdhe6144" 00:19:51.836 } 00:19:51.836 } 00:19:51.836 ]' 00:19:51.836 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.836 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.836 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.836 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.837 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.837 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.837 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.837 17:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.096 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:52.096 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:52.665 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.665 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.665 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.665 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.665 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.665 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.665 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:52.665 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:52.925 
17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.925 17:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.190 00:19:53.449 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.450 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.450 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.450 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.450 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.450 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.450 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.450 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.450 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.450 { 00:19:53.450 "cntlid": 85, 00:19:53.450 "qid": 0, 00:19:53.450 "state": "enabled", 00:19:53.450 "thread": "nvmf_tgt_poll_group_000", 00:19:53.450 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:53.450 "listen_address": { 00:19:53.450 "trtype": "TCP", 00:19:53.450 "adrfam": "IPv4", 00:19:53.450 "traddr": "10.0.0.2", 00:19:53.450 "trsvcid": "4420" 00:19:53.450 }, 00:19:53.450 "peer_address": { 00:19:53.450 "trtype": "TCP", 00:19:53.450 "adrfam": "IPv4", 00:19:53.450 "traddr": "10.0.0.1", 00:19:53.450 "trsvcid": "56728" 00:19:53.450 }, 00:19:53.450 "auth": { 00:19:53.450 "state": "completed", 00:19:53.450 "digest": "sha384", 00:19:53.450 "dhgroup": "ffdhe6144" 00:19:53.450 } 00:19:53.450 } 00:19:53.450 ]' 00:19:53.450 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.709 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.709 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.709 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.709 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.709 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.709 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.709 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.970 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:53.970 17:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:19:54.540 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.540 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.540 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.540 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.540 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.540 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.540 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:54.540 17:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:54.800 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:54.800 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.800 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.800 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:54.800 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.800 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.801 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:54.801 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.801 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.801 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.801 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.801 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.801 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.060 00:19:55.060 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.060 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.060 17:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.321 { 00:19:55.321 "cntlid": 87, 
00:19:55.321 "qid": 0, 00:19:55.321 "state": "enabled", 00:19:55.321 "thread": "nvmf_tgt_poll_group_000", 00:19:55.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:55.321 "listen_address": { 00:19:55.321 "trtype": "TCP", 00:19:55.321 "adrfam": "IPv4", 00:19:55.321 "traddr": "10.0.0.2", 00:19:55.321 "trsvcid": "4420" 00:19:55.321 }, 00:19:55.321 "peer_address": { 00:19:55.321 "trtype": "TCP", 00:19:55.321 "adrfam": "IPv4", 00:19:55.321 "traddr": "10.0.0.1", 00:19:55.321 "trsvcid": "56760" 00:19:55.321 }, 00:19:55.321 "auth": { 00:19:55.321 "state": "completed", 00:19:55.321 "digest": "sha384", 00:19:55.321 "dhgroup": "ffdhe6144" 00:19:55.321 } 00:19:55.321 } 00:19:55.321 ]' 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.321 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.581 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:55.581 17:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:19:56.151 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.151 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.151 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.151 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.151 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.151 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.151 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.151 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:56.151 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.417 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.032 00:19:57.033 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.033 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.033 17:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.033 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.033 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.033 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.033 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.033 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.033 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.033 { 00:19:57.033 "cntlid": 89, 00:19:57.033 "qid": 0, 00:19:57.033 "state": "enabled", 00:19:57.033 "thread": "nvmf_tgt_poll_group_000", 00:19:57.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:57.033 "listen_address": { 00:19:57.033 "trtype": "TCP", 00:19:57.033 "adrfam": "IPv4", 00:19:57.033 "traddr": "10.0.0.2", 00:19:57.033 "trsvcid": "4420" 00:19:57.033 }, 00:19:57.033 "peer_address": { 00:19:57.033 "trtype": "TCP", 00:19:57.033 "adrfam": "IPv4", 00:19:57.033 "traddr": "10.0.0.1", 00:19:57.033 "trsvcid": "56790" 00:19:57.033 }, 00:19:57.033 "auth": { 00:19:57.033 "state": "completed", 00:19:57.033 "digest": "sha384", 00:19:57.033 "dhgroup": "ffdhe8192" 00:19:57.033 } 00:19:57.033 } 00:19:57.033 ]' 00:19:57.033 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.033 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.033 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.329 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.329 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.329 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.329 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.329 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.329 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:57.329 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:19:57.904 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.164 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.164 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.164 17:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.164 17:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.164 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.165 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.165 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.735 00:19:58.735 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.735 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.735 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.995 { 00:19:58.995 "cntlid": 91, 00:19:58.995 "qid": 0, 00:19:58.995 "state": "enabled", 00:19:58.995 "thread": "nvmf_tgt_poll_group_000", 00:19:58.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:58.995 "listen_address": { 00:19:58.995 "trtype": "TCP", 00:19:58.995 "adrfam": "IPv4", 00:19:58.995 "traddr": "10.0.0.2", 00:19:58.995 "trsvcid": "4420" 00:19:58.995 }, 00:19:58.995 "peer_address": { 00:19:58.995 "trtype": "TCP", 00:19:58.995 "adrfam": "IPv4", 00:19:58.995 "traddr": "10.0.0.1", 00:19:58.995 "trsvcid": "56810" 00:19:58.995 }, 00:19:58.995 "auth": { 00:19:58.995 "state": "completed", 00:19:58.995 "digest": "sha384", 00:19:58.995 "dhgroup": "ffdhe8192" 00:19:58.995 } 00:19:58.995 } 00:19:58.995 ]' 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.995 17:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.995 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.995 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.995 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.255 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:59.255 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:19:59.826 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.826 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.826 17:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.826 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.826 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.826 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.826 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:59.826 17:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.085 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.086 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.655 00:20:00.655 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.656 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.656 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.916 17:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.916 { 00:20:00.916 "cntlid": 93, 00:20:00.916 "qid": 0, 00:20:00.916 "state": "enabled", 00:20:00.916 "thread": "nvmf_tgt_poll_group_000", 00:20:00.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:00.916 "listen_address": { 00:20:00.916 "trtype": "TCP", 00:20:00.916 "adrfam": "IPv4", 00:20:00.916 "traddr": "10.0.0.2", 00:20:00.916 "trsvcid": "4420" 00:20:00.916 }, 00:20:00.916 "peer_address": { 00:20:00.916 "trtype": "TCP", 00:20:00.916 "adrfam": "IPv4", 00:20:00.916 "traddr": "10.0.0.1", 00:20:00.916 "trsvcid": "56832" 00:20:00.916 }, 00:20:00.916 "auth": { 00:20:00.916 "state": "completed", 00:20:00.916 "digest": "sha384", 00:20:00.916 "dhgroup": "ffdhe8192" 00:20:00.916 } 00:20:00.916 } 00:20:00.916 ]' 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.916 17:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.176 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:01.177 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:01.746 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.746 17:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.746 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.746 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.746 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.746 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.746 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:01.746 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:02.005 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:02.005 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.005 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.005 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:02.005 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:02.005 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.005 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:02.005 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.006 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.006 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.006 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:02.006 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.006 17:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.575 00:20:02.575 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.575 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.575 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.575 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.575 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.575 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.575 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.575 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.575 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.575 { 00:20:02.575 "cntlid": 95, 00:20:02.575 "qid": 0, 00:20:02.575 "state": "enabled", 00:20:02.575 "thread": "nvmf_tgt_poll_group_000", 00:20:02.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:02.575 "listen_address": { 00:20:02.575 "trtype": "TCP", 00:20:02.575 "adrfam": "IPv4", 00:20:02.575 "traddr": "10.0.0.2", 00:20:02.575 "trsvcid": "4420" 00:20:02.575 }, 00:20:02.575 "peer_address": { 00:20:02.575 "trtype": "TCP", 00:20:02.575 "adrfam": "IPv4", 00:20:02.575 "traddr": "10.0.0.1", 00:20:02.575 "trsvcid": "54310" 00:20:02.575 }, 00:20:02.575 "auth": { 00:20:02.575 "state": "completed", 00:20:02.575 "digest": "sha384", 00:20:02.575 "dhgroup": "ffdhe8192" 00:20:02.575 } 00:20:02.575 } 00:20:02.575 ]' 00:20:02.575 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.834 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.834 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.834 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.835 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.835 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.835 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.835 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.094 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:03.094 17:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:03.665 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.665 17:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:03.665 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.665 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.665 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.665 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:03.665 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.665 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.665 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:03.665 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.925 17:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.185 00:20:04.185 
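Each connect_authenticate round above follows the same RPC sequence. A minimal sketch of the sha512/null round just completed, assuming the SPDK target (default RPC socket) and the host-side app (rpc.py against /var/tmp/host.sock) are already running, and that the key names key0/ckey0 were registered with the target earlier in the run; the NQNs and addresses are the ones from this log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Pin the host side to a single digest/dhgroup pair for this round.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# Allow the host on the subsystem, binding it to key0 (ckey0 enables bidirectional auth).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach from the host; DH-HMAC-CHAP runs as part of the fabric CONNECT.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0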
17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.185 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.185 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.185 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.186 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.186 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.186 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.186 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.186 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.186 { 00:20:04.186 "cntlid": 97, 00:20:04.186 "qid": 0, 00:20:04.186 "state": "enabled", 00:20:04.186 "thread": "nvmf_tgt_poll_group_000", 00:20:04.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:04.186 "listen_address": { 00:20:04.186 "trtype": "TCP", 00:20:04.186 "adrfam": "IPv4", 00:20:04.186 "traddr": "10.0.0.2", 00:20:04.186 "trsvcid": "4420" 00:20:04.186 }, 00:20:04.186 "peer_address": { 00:20:04.186 "trtype": "TCP", 00:20:04.186 "adrfam": "IPv4", 00:20:04.186 "traddr": "10.0.0.1", 00:20:04.186 "trsvcid": "54336" 00:20:04.186 }, 00:20:04.186 "auth": { 00:20:04.186 "state": "completed", 00:20:04.186 "digest": "sha512", 00:20:04.186 "dhgroup": "null" 00:20:04.186 } 00:20:04.186 } 00:20:04.186 ]' 00:20:04.186 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.446 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.446 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.446 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.446 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.446 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.446 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.446 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.707 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:04.707 17:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:05.278 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.278 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.278 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.278 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.278 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.278 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.278 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:05.278 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.538 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.798 00:20:05.798 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.798 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.798 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.798 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.798 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.798 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.798 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.798 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.798 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.798 { 00:20:05.798 "cntlid": 99, 00:20:05.798 "qid": 0, 00:20:05.798 "state": "enabled", 00:20:05.798 "thread": "nvmf_tgt_poll_group_000", 00:20:05.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:05.798 "listen_address": { 00:20:05.798 "trtype": "TCP", 00:20:05.798 "adrfam": "IPv4", 00:20:05.798 "traddr": "10.0.0.2", 00:20:05.798 "trsvcid": "4420" 00:20:05.798 }, 00:20:05.798 "peer_address": { 00:20:05.798 "trtype": "TCP", 00:20:05.798 "adrfam": "IPv4", 00:20:05.798 "traddr": "10.0.0.1", 00:20:05.798 "trsvcid": "54372" 00:20:05.798 }, 00:20:05.798 "auth": { 00:20:05.798 "state": "completed", 00:20:05.798 "digest": "sha512", 00:20:05.798 "dhgroup": "null" 00:20:05.798 } 00:20:05.798 } 00:20:05.798 ]' 00:20:05.799 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.059 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.059 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.059 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.059 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.059 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.059 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.059 17:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.319 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:06.319 17:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:06.888 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.889 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.889 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.889 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.889 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.889 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.889 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:06.889 17:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
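The attach above is then verified the same way as in the earlier rounds: the host must report the new controller, and the target's qpair must show the negotiated auth parameters. A condensed sketch of those jq checks, with the expected values being the sha512/null pair this round configured:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: the new bdev controller should be listed as nvme0.
[[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target side: the qpair must have completed DH-HMAC-CHAP with the expected parameters.
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach before the next round reconfigures the host options.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0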
00:20:07.149 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.409 00:20:07.409 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.409 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.409 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.669 { 00:20:07.669 "cntlid": 101, 00:20:07.669 "qid": 0, 00:20:07.669 "state": "enabled", 00:20:07.669 "thread": "nvmf_tgt_poll_group_000", 00:20:07.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:07.669 "listen_address": { 00:20:07.669 "trtype": "TCP", 00:20:07.669 "adrfam": "IPv4", 00:20:07.669 "traddr": "10.0.0.2", 00:20:07.669 "trsvcid": "4420" 00:20:07.669 }, 00:20:07.669 "peer_address": { 00:20:07.669 "trtype": "TCP", 00:20:07.669 "adrfam": "IPv4", 00:20:07.669 "traddr": "10.0.0.1", 00:20:07.669 "trsvcid": "54406" 00:20:07.669 }, 00:20:07.669 "auth": { 00:20:07.669 "state": "completed", 00:20:07.669 "digest": "sha512", 00:20:07.669 "dhgroup": "null" 00:20:07.669 } 00:20:07.669 } 00:20:07.669 ]' 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.669 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.928 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:07.928 17:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:08.496 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.496 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.496 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.496 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.496 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.496 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.496 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:08.496 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.754 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.013 00:20:09.013 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.013 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.013 17:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.272 { 00:20:09.272 "cntlid": 103, 00:20:09.272 "qid": 0, 00:20:09.272 "state": "enabled", 00:20:09.272 "thread": "nvmf_tgt_poll_group_000", 00:20:09.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:09.272 "listen_address": { 00:20:09.272 "trtype": "TCP", 00:20:09.272 "adrfam": "IPv4", 00:20:09.272 "traddr": "10.0.0.2", 00:20:09.272 "trsvcid": "4420" 00:20:09.272 }, 00:20:09.272 "peer_address": { 00:20:09.272 "trtype": "TCP", 00:20:09.272 "adrfam": "IPv4", 00:20:09.272 "traddr": "10.0.0.1", 00:20:09.272 "trsvcid": "54432" 00:20:09.272 }, 00:20:09.272 "auth": { 00:20:09.272 "state": "completed", 00:20:09.272 "digest": "sha512", 00:20:09.272 "dhgroup": "null" 00:20:09.272 } 00:20:09.272 } 00:20:09.272 ]' 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.272 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.532 17:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:09.533 17:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:10.101 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.101 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.101 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.101 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.101 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.101 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.101 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.101 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.101 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
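Each pass of the trace above is one authentication cycle: reconfigure the host's allowed digest/DH-group, authorize the host on the target with the key under test, attach a controller, verify, and tear down. A minimal sketch of the host-side RPC leg of the cycle just starting (sha512 / ffdhe2048 / key0) follows; the rpc.py path, socket, NQNs and key names are copied from the trace, and the keys are assumed to be already registered in the host's keyring.

# Host-side attach with DH-HMAC-CHAP, mirroring the traced commands.
# $rpc is left unquoted on purpose so the embedded "-s <socket>" word-splits.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"

# Restrict the host to a single digest/DH-group combination before connecting.
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Attach a controller, authenticating with key0 and (bidirectionally) ckey0.
$rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The trace then confirms the controller exists before probing the target side.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]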
00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.360 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.620 00:20:10.620 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.620 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.620 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.880 { 00:20:10.880 "cntlid": 105, 00:20:10.880 "qid": 0, 00:20:10.880 "state": "enabled", 00:20:10.880 "thread": "nvmf_tgt_poll_group_000", 00:20:10.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:10.880 "listen_address": { 00:20:10.880 "trtype": "TCP", 00:20:10.880 "adrfam": "IPv4", 00:20:10.880 "traddr": "10.0.0.2", 00:20:10.880 "trsvcid": "4420" 00:20:10.880 }, 00:20:10.880 "peer_address": { 00:20:10.880 "trtype": "TCP", 00:20:10.880 "adrfam": "IPv4", 00:20:10.880 "traddr": "10.0.0.1", 00:20:10.880 "trsvcid": "54458" 00:20:10.880 }, 00:20:10.880 "auth": { 00:20:10.880 "state": "completed", 00:20:10.880 "digest": "sha512", 00:20:10.880 "dhgroup": "ffdhe2048" 00:20:10.880 } 00:20:10.880 } 00:20:10.880 ]' 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.880 17:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.880 17:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.139 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:11.139 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:11.709 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.709 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.709 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.709 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.709 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.709 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.709 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:11.709 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.969 17:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.228 00:20:12.228 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.228 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.228 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.489 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.489 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.489 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.489 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.489 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.489 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.489 { 00:20:12.489 "cntlid": 107, 00:20:12.489 "qid": 0, 00:20:12.490 "state": "enabled", 00:20:12.490 "thread": "nvmf_tgt_poll_group_000", 00:20:12.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:12.490 "listen_address": { 00:20:12.490 "trtype": "TCP", 00:20:12.490 "adrfam": "IPv4", 00:20:12.490 "traddr": "10.0.0.2", 00:20:12.490 "trsvcid": "4420" 00:20:12.490 }, 00:20:12.490 "peer_address": { 00:20:12.490 "trtype": "TCP", 00:20:12.490 "adrfam": "IPv4", 00:20:12.490 "traddr": "10.0.0.1", 00:20:12.490 "trsvcid": "37978" 00:20:12.490 }, 00:20:12.490 "auth": { 00:20:12.490 "state": "completed", 00:20:12.490 "digest": "sha512", 00:20:12.490 "dhgroup": "ffdhe2048" 00:20:12.490 } 00:20:12.490 } 00:20:12.490 ]' 00:20:12.490 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.490 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.490 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.490 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.490 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:12.490 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.490 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.490 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.750 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:12.750 17:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:13.319 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.319 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.319 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.319 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.319 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.319 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.319 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:13.319 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
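On the target side, each cycle authorizes the host NQN with the matching key and then inspects the accepted queue pair; the jq probes visible in the trace assert the negotiated digest, DH group, and a completed authentication transaction. A sketch of that verification step (the trace drives it through rpc_cmd against the target's default RPC socket; key2/ckey2 match the cycle in progress here):

# Target-side authorization and qpair verification, as exercised by the trace.
tgt_rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

# Authorize the host for the subsystem with the key pair under test.
$tgt_rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# After the host connects, the accepted qpair must report the negotiated
# parameters and a completed authentication state.
qpairs=$($tgt_rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]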
00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.579 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.840 00:20:13.840 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.840 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.840 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.100 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.101 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.101 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.101 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.101 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.101 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.101 { 00:20:14.101 "cntlid": 109, 00:20:14.101 "qid": 0, 00:20:14.101 "state": "enabled", 00:20:14.101 "thread": "nvmf_tgt_poll_group_000", 00:20:14.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:14.101 "listen_address": { 00:20:14.101 "trtype": "TCP", 00:20:14.101 "adrfam": "IPv4", 00:20:14.101 "traddr": "10.0.0.2", 00:20:14.101 "trsvcid": "4420" 00:20:14.101 }, 00:20:14.101 "peer_address": { 00:20:14.101 "trtype": "TCP", 00:20:14.101 "adrfam": "IPv4", 00:20:14.101 "traddr": "10.0.0.1", 00:20:14.101 "trsvcid": "37994" 00:20:14.101 }, 00:20:14.101 "auth": { 00:20:14.101 "state": "completed", 00:20:14.101 "digest": "sha512", 00:20:14.101 "dhgroup": "ffdhe2048" 00:20:14.101 } 00:20:14.101 } 00:20:14.101 ]' 00:20:14.101 17:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.101 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.101 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.101 17:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:14.101 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.101 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.101 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.101 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.361 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:14.361 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:14.974 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.975 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.975 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.975 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.975 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.975 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.975 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:14.975 17:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.235 17:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.235 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.495 00:20:15.495 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.495 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.495 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.495 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.495 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.495 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.495 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.495 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.495 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.495 { 00:20:15.495 "cntlid": 111, 00:20:15.495 "qid": 0, 00:20:15.495 "state": "enabled", 00:20:15.495 "thread": "nvmf_tgt_poll_group_000", 00:20:15.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:15.495 "listen_address": { 00:20:15.495 "trtype": "TCP", 00:20:15.495 "adrfam": "IPv4", 00:20:15.495 "traddr": "10.0.0.2", 00:20:15.495 "trsvcid": "4420" 00:20:15.495 }, 00:20:15.495 "peer_address": { 00:20:15.495 "trtype": "TCP", 00:20:15.495 "adrfam": "IPv4", 00:20:15.495 "traddr": "10.0.0.1", 00:20:15.495 "trsvcid": "38026" 00:20:15.495 }, 00:20:15.495 "auth": { 00:20:15.495 "state": "completed", 00:20:15.495 "digest": "sha512", 00:20:15.495 "dhgroup": "ffdhe2048" 00:20:15.495 } 00:20:15.495 } 00:20:15.495 ]' 00:20:15.495 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.754 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.754 
17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.754 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.754 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.754 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.754 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.754 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.014 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:16.014 17:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:16.582 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.582 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.582 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.582 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.582 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.582 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.582 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.582 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.582 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.842 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.102 00:20:17.102 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.102 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.102 17:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.102 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.102 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.363 { 00:20:17.363 "cntlid": 113, 00:20:17.363 "qid": 0, 00:20:17.363 "state": "enabled", 00:20:17.363 "thread": "nvmf_tgt_poll_group_000", 00:20:17.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:17.363 "listen_address": { 00:20:17.363 "trtype": "TCP", 00:20:17.363 "adrfam": "IPv4", 00:20:17.363 "traddr": "10.0.0.2", 00:20:17.363 "trsvcid": "4420" 00:20:17.363 }, 00:20:17.363 "peer_address": { 00:20:17.363 "trtype": "TCP", 00:20:17.363 "adrfam": "IPv4", 00:20:17.363 "traddr": "10.0.0.1", 00:20:17.363 "trsvcid": "38056" 00:20:17.363 }, 00:20:17.363 "auth": { 00:20:17.363 "state": "completed", 00:20:17.363 "digest": "sha512", 00:20:17.363 "dhgroup": "ffdhe3072" 00:20:17.363 } 00:20:17.363 } 00:20:17.363 ]' 00:20:17.363 17:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.363 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.623 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:17.623 17:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:18.193 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.193 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:18.193 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.193 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.193 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.193 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.193 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:18.193 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.453 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.713 00:20:18.713 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.713 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.713 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.975 { 00:20:18.975 "cntlid": 115, 00:20:18.975 "qid": 0, 00:20:18.975 "state": "enabled", 00:20:18.975 "thread": "nvmf_tgt_poll_group_000", 00:20:18.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:18.975 "listen_address": { 00:20:18.975 "trtype": "TCP", 00:20:18.975 "adrfam": "IPv4", 00:20:18.975 "traddr": "10.0.0.2", 00:20:18.975 "trsvcid": "4420" 00:20:18.975 }, 00:20:18.975 "peer_address": { 00:20:18.975 "trtype": "TCP", 00:20:18.975 "adrfam": "IPv4", 
00:20:18.975 "traddr": "10.0.0.1", 00:20:18.975 "trsvcid": "38088" 00:20:18.975 }, 00:20:18.975 "auth": { 00:20:18.975 "state": "completed", 00:20:18.975 "digest": "sha512", 00:20:18.975 "dhgroup": "ffdhe3072" 00:20:18.975 } 00:20:18.975 } 00:20:18.975 ]' 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.975 17:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.234 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:19.235 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:19.805 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.805 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.805 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.805 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.805 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.805 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.805 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:19.805 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:20.064 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:20:20.064 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.064 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:20.064 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:20.064 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:20.064 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.064 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.064 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.065 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.065 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.065 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.065 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.065 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.324 00:20:20.324 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.324 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.324 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.585 { 00:20:20.585 "cntlid": 117, 00:20:20.585 "qid": 0, 00:20:20.585 "state": "enabled", 00:20:20.585 "thread": "nvmf_tgt_poll_group_000", 00:20:20.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:20.585 "listen_address": { 00:20:20.585 "trtype": "TCP", 
00:20:20.585 "adrfam": "IPv4", 00:20:20.585 "traddr": "10.0.0.2", 00:20:20.585 "trsvcid": "4420" 00:20:20.585 }, 00:20:20.585 "peer_address": { 00:20:20.585 "trtype": "TCP", 00:20:20.585 "adrfam": "IPv4", 00:20:20.585 "traddr": "10.0.0.1", 00:20:20.585 "trsvcid": "38114" 00:20:20.585 }, 00:20:20.585 "auth": { 00:20:20.585 "state": "completed", 00:20:20.585 "digest": "sha512", 00:20:20.585 "dhgroup": "ffdhe3072" 00:20:20.585 } 00:20:20.585 } 00:20:20.585 ]' 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.585 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.845 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:20.845 17:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:21.416 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.416 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.416 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.416 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.416 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.416 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.416 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:21.416 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.677 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.937 00:20:21.937 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.937 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.937 17:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.197 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.197 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.197 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.197 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.197 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.197 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.197 { 00:20:22.197 "cntlid": 119, 00:20:22.197 "qid": 0, 00:20:22.197 "state": "enabled", 00:20:22.197 "thread": "nvmf_tgt_poll_group_000", 00:20:22.197 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:22.197 "listen_address": { 00:20:22.197 "trtype": "TCP", 00:20:22.197 "adrfam": "IPv4", 00:20:22.197 "traddr": "10.0.0.2", 00:20:22.197 "trsvcid": "4420" 00:20:22.197 }, 00:20:22.197 "peer_address": { 00:20:22.197 "trtype": "TCP", 00:20:22.197 "adrfam": "IPv4", 00:20:22.197 "traddr": "10.0.0.1", 00:20:22.197 "trsvcid": "60702" 00:20:22.197 }, 00:20:22.197 "auth": { 00:20:22.197 "state": "completed", 00:20:22.197 "digest": "sha512", 00:20:22.197 "dhgroup": "ffdhe3072" 00:20:22.197 } 00:20:22.197 } 00:20:22.197 ]' 00:20:22.197 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.197 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.198 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.198 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.198 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.198 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.198 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.198 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.458 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:22.458 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:23.027 17:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.027 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:23.027 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.027 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.027 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.027 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.027 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.027 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:23.027 17:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.287 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.546 00:20:23.546 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.546 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.546 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.806 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.807 17:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.807 { 00:20:23.807 "cntlid": 121, 00:20:23.807 "qid": 0, 00:20:23.807 "state": "enabled", 00:20:23.807 "thread": "nvmf_tgt_poll_group_000", 00:20:23.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:23.807 "listen_address": { 00:20:23.807 "trtype": "TCP", 00:20:23.807 "adrfam": "IPv4", 00:20:23.807 "traddr": "10.0.0.2", 00:20:23.807 "trsvcid": "4420" 00:20:23.807 }, 00:20:23.807 "peer_address": { 00:20:23.807 "trtype": "TCP", 00:20:23.807 "adrfam": "IPv4", 00:20:23.807 "traddr": "10.0.0.1", 00:20:23.807 "trsvcid": "60726" 00:20:23.807 }, 00:20:23.807 "auth": { 00:20:23.807 "state": "completed", 00:20:23.807 "digest": "sha512", 00:20:23.807 "dhgroup": "ffdhe4096" 00:20:23.807 } 00:20:23.807 } 00:20:23.807 ]' 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.807 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.066 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:24.066 17:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:24.636 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.636 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:24.636 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.636 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.636 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
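The block above completes one full iteration of the pattern this log repeats for every digest/dhgroup/key combination: configure the host's allowed DH-HMAC-CHAP parameters, register the host NQN on the target with the key under test, attach a controller, assert that the qpair negotiated the expected digest, dhgroup, and auth state, then tear everything down. A minimal sketch of that cycle in bash, assuming the target app answers on rpc.py's default socket, the host app on /var/tmp/host.sock, and that the named keys (key0..key3, ckey0..ckey3) were loaded into a keyring earlier in the run (not shown in this section) — the surrounding log remains the authoritative sequence:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    nqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Host side: restrict the initiator to a single digest/dhgroup pair.
        "$rpc" -s "$hostsock" bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Target side: allow this host NQN with the key under test; passing
        # --dhchap-ctrlr-key "ckey$keyid" here (and below) makes it bidirectional.
        "$rpc" nvmf_subsystem_add_host "$nqn" "$hostnqn" --dhchap-key "key$keyid"

        # Connect, then assert what the qpair actually negotiated.
        "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$nqn" -b nvme0 \
            --dhchap-key "key$keyid"
        local qpairs
        qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$nqn")
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]

        # Tear down so the next combination starts from a clean state.
        "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
        "$rpc" nvmf_subsystem_remove_host "$nqn" "$hostnqn"
    }

    # e.g. the iteration visible just above:
    connect_authenticate sha512 ffdhe4096 1

The quoted-string comparisons stand in for the escaped-pattern checks ([[ sha512 == \s\h\a\5\1\2 ]]) that the xtrace output shows; the behavior is the same.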
00:20:24.636 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.636 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.636 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.897 17:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.156 00:20:25.156 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.156 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.156 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.416 { 00:20:25.416 "cntlid": 123, 00:20:25.416 "qid": 0, 00:20:25.416 "state": "enabled", 00:20:25.416 "thread": "nvmf_tgt_poll_group_000", 00:20:25.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:25.416 "listen_address": { 00:20:25.416 "trtype": "TCP", 00:20:25.416 "adrfam": "IPv4", 00:20:25.416 "traddr": "10.0.0.2", 00:20:25.416 "trsvcid": "4420" 00:20:25.416 }, 00:20:25.416 "peer_address": { 00:20:25.416 "trtype": "TCP", 00:20:25.416 "adrfam": "IPv4", 00:20:25.416 "traddr": "10.0.0.1", 00:20:25.416 "trsvcid": "60754" 00:20:25.416 }, 00:20:25.416 "auth": { 00:20:25.416 "state": "completed", 00:20:25.416 "digest": "sha512", 00:20:25.416 "dhgroup": "ffdhe4096" 00:20:25.416 } 00:20:25.416 } 00:20:25.416 ]' 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.416 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.676 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:25.676 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:26.245 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.245 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:26.245 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.245 17:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.245 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.245 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.245 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:26.245 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.505 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.765 00:20:26.765 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.765 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.765 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.025 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.025 17:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.025 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.025 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.025 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.025 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.025 { 00:20:27.025 "cntlid": 125, 00:20:27.025 "qid": 0, 00:20:27.025 "state": "enabled", 00:20:27.025 "thread": "nvmf_tgt_poll_group_000", 00:20:27.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:27.025 "listen_address": { 00:20:27.025 "trtype": "TCP", 00:20:27.025 "adrfam": "IPv4", 00:20:27.025 "traddr": "10.0.0.2", 00:20:27.025 "trsvcid": "4420" 00:20:27.025 }, 00:20:27.025 "peer_address": { 00:20:27.025 "trtype": "TCP", 00:20:27.025 "adrfam": "IPv4", 00:20:27.025 "traddr": "10.0.0.1", 00:20:27.025 "trsvcid": "60776" 00:20:27.025 }, 00:20:27.025 "auth": { 00:20:27.025 "state": "completed", 00:20:27.025 "digest": "sha512", 00:20:27.025 "dhgroup": "ffdhe4096" 00:20:27.025 } 00:20:27.025 } 00:20:27.025 ]' 00:20:27.025 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.025 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.025 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.025 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:27.025 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.284 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.284 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.284 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.285 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:27.285 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:28.224 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.224 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.224 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.224 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.224 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.224 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.224 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:28.224 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.224 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.484 00:20:28.484 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.484 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.484 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.744 17:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.744 { 00:20:28.744 "cntlid": 127, 00:20:28.744 "qid": 0, 00:20:28.744 "state": "enabled", 00:20:28.744 "thread": "nvmf_tgt_poll_group_000", 00:20:28.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:28.744 "listen_address": { 00:20:28.744 "trtype": "TCP", 00:20:28.744 "adrfam": "IPv4", 00:20:28.744 "traddr": "10.0.0.2", 00:20:28.744 "trsvcid": "4420" 00:20:28.744 }, 00:20:28.744 "peer_address": { 00:20:28.744 "trtype": "TCP", 00:20:28.744 "adrfam": "IPv4", 00:20:28.744 "traddr": "10.0.0.1", 00:20:28.744 "trsvcid": "60798" 00:20:28.744 }, 00:20:28.744 "auth": { 00:20:28.744 "state": "completed", 00:20:28.744 "digest": "sha512", 00:20:28.744 "dhgroup": "ffdhe4096" 00:20:28.744 } 00:20:28.744 } 00:20:28.744 ]' 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.744 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.004 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:29.004 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:29.574 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.574 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:29.574 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.574 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.574 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.574 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.574 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.574 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.574 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.835 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.095 00:20:30.095 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.095 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.095 
17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.355 { 00:20:30.355 "cntlid": 129, 00:20:30.355 "qid": 0, 00:20:30.355 "state": "enabled", 00:20:30.355 "thread": "nvmf_tgt_poll_group_000", 00:20:30.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:30.355 "listen_address": { 00:20:30.355 "trtype": "TCP", 00:20:30.355 "adrfam": "IPv4", 00:20:30.355 "traddr": "10.0.0.2", 00:20:30.355 "trsvcid": "4420" 00:20:30.355 }, 00:20:30.355 "peer_address": { 00:20:30.355 "trtype": "TCP", 00:20:30.355 "adrfam": "IPv4", 00:20:30.355 "traddr": "10.0.0.1", 00:20:30.355 "trsvcid": "60820" 00:20:30.355 }, 00:20:30.355 "auth": { 00:20:30.355 "state": "completed", 00:20:30.355 "digest": "sha512", 00:20:30.355 "dhgroup": "ffdhe6144" 00:20:30.355 } 00:20:30.355 } 00:20:30.355 ]' 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.355 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.616 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.616 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.616 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.616 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:30.616 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret 
DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.557 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.815 00:20:31.815 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.815 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.815 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.074 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.074 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.074 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.074 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.074 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.074 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.074 { 00:20:32.074 "cntlid": 131, 00:20:32.074 "qid": 0, 00:20:32.074 "state": "enabled", 00:20:32.074 "thread": "nvmf_tgt_poll_group_000", 00:20:32.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:32.074 "listen_address": { 00:20:32.074 "trtype": "TCP", 00:20:32.074 "adrfam": "IPv4", 00:20:32.074 "traddr": "10.0.0.2", 00:20:32.074 "trsvcid": "4420" 00:20:32.074 }, 00:20:32.074 "peer_address": { 00:20:32.074 "trtype": "TCP", 00:20:32.074 "adrfam": "IPv4", 00:20:32.074 "traddr": "10.0.0.1", 00:20:32.074 "trsvcid": "45274" 00:20:32.074 }, 00:20:32.074 "auth": { 00:20:32.074 "state": "completed", 00:20:32.074 "digest": "sha512", 00:20:32.074 "dhgroup": "ffdhe6144" 00:20:32.074 } 00:20:32.074 } 00:20:32.074 ]' 00:20:32.074 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.074 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.074 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.074 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.074 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.333 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.333 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.333 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.333 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:32.333 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:33.271 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.271 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.271 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.271 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.271 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.271 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.271 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:33.271 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.271 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.531 00:20:33.531 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.531 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.531 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.791 { 00:20:33.791 "cntlid": 133, 00:20:33.791 "qid": 0, 00:20:33.791 "state": "enabled", 00:20:33.791 "thread": "nvmf_tgt_poll_group_000", 00:20:33.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:33.791 "listen_address": { 00:20:33.791 "trtype": "TCP", 00:20:33.791 "adrfam": "IPv4", 00:20:33.791 "traddr": "10.0.0.2", 00:20:33.791 "trsvcid": "4420" 00:20:33.791 }, 00:20:33.791 "peer_address": { 00:20:33.791 "trtype": "TCP", 00:20:33.791 "adrfam": "IPv4", 00:20:33.791 "traddr": "10.0.0.1", 00:20:33.791 "trsvcid": "45320" 00:20:33.791 }, 00:20:33.791 "auth": { 00:20:33.791 "state": "completed", 00:20:33.791 "digest": "sha512", 00:20:33.791 "dhgroup": "ffdhe6144" 00:20:33.791 } 00:20:33.791 } 00:20:33.791 ]' 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.791 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.051 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.051 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret 
DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:34.051 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:34.652 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:20:34.945 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.233 00:20:35.233 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.233 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.233 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.492 { 00:20:35.492 "cntlid": 135, 00:20:35.492 "qid": 0, 00:20:35.492 "state": "enabled", 00:20:35.492 "thread": "nvmf_tgt_poll_group_000", 00:20:35.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:35.492 "listen_address": { 00:20:35.492 "trtype": "TCP", 00:20:35.492 "adrfam": "IPv4", 00:20:35.492 "traddr": "10.0.0.2", 00:20:35.492 "trsvcid": "4420" 00:20:35.492 }, 00:20:35.492 "peer_address": { 00:20:35.492 "trtype": "TCP", 00:20:35.492 "adrfam": "IPv4", 00:20:35.492 "traddr": "10.0.0.1", 00:20:35.492 "trsvcid": "45346" 00:20:35.492 }, 00:20:35.492 "auth": { 00:20:35.492 "state": "completed", 00:20:35.492 "digest": "sha512", 00:20:35.492 "dhgroup": "ffdhe6144" 00:20:35.492 } 00:20:35.492 } 00:20:35.492 ]' 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:35.492 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.752 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.752 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.752 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.752 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:35.752 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.691 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.259 00:20:37.259 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.259 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.260 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.260 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.260 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.260 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.260 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.260 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.260 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.260 { 00:20:37.260 "cntlid": 137, 00:20:37.260 "qid": 0, 00:20:37.260 "state": "enabled", 00:20:37.260 "thread": "nvmf_tgt_poll_group_000", 00:20:37.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:37.260 "listen_address": { 00:20:37.260 "trtype": "TCP", 00:20:37.260 "adrfam": "IPv4", 00:20:37.260 "traddr": "10.0.0.2", 00:20:37.260 "trsvcid": "4420" 00:20:37.260 }, 00:20:37.260 "peer_address": { 00:20:37.260 "trtype": "TCP", 00:20:37.260 "adrfam": "IPv4", 00:20:37.260 "traddr": "10.0.0.1", 00:20:37.260 "trsvcid": "45376" 00:20:37.260 }, 00:20:37.260 "auth": { 00:20:37.260 "state": "completed", 00:20:37.260 "digest": "sha512", 00:20:37.260 "dhgroup": "ffdhe8192" 00:20:37.260 } 00:20:37.260 } 00:20:37.260 ]' 00:20:37.260 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.520 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.520 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.520 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.520 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.520 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.520 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.520 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.780 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:37.780 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:38.351 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.351 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:38.351 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.351 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.351 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.351 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.351 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.351 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.611 17:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.611 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.872 00:20:38.872 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.872 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.872 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.133 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.133 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.133 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.133 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.133 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.133 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.133 { 00:20:39.133 "cntlid": 139, 00:20:39.133 "qid": 0, 00:20:39.133 "state": "enabled", 00:20:39.133 "thread": "nvmf_tgt_poll_group_000", 00:20:39.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:39.133 "listen_address": { 00:20:39.133 "trtype": "TCP", 00:20:39.133 "adrfam": "IPv4", 00:20:39.133 "traddr": "10.0.0.2", 00:20:39.133 "trsvcid": "4420" 00:20:39.133 }, 00:20:39.133 "peer_address": { 00:20:39.133 "trtype": "TCP", 00:20:39.133 "adrfam": "IPv4", 00:20:39.133 "traddr": "10.0.0.1", 00:20:39.133 "trsvcid": "45402" 00:20:39.133 }, 00:20:39.133 "auth": { 00:20:39.133 "state": "completed", 00:20:39.133 "digest": "sha512", 00:20:39.133 "dhgroup": "ffdhe8192" 00:20:39.133 } 00:20:39.133 } 00:20:39.133 ]' 00:20:39.133 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.133 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.133 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.393 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.393 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.393 17:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.393 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.393 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.393 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:39.393 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: --dhchap-ctrl-secret DHHC-1:02:YzM0OWZjZjY1OTg2MGU5NzY1ZGU5Nzc2YTg4NWVjYTg1NWZlN2ZkNjE4ZjQ0YTQ4/e2CGg==: 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.334 17:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.334 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.904 00:20:40.904 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.904 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.904 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.904 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.904 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.904 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.904 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.168 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.168 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.168 { 00:20:41.168 "cntlid": 141, 00:20:41.168 "qid": 0, 00:20:41.168 "state": "enabled", 00:20:41.168 "thread": "nvmf_tgt_poll_group_000", 00:20:41.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:41.168 "listen_address": { 00:20:41.168 "trtype": "TCP", 00:20:41.168 "adrfam": "IPv4", 00:20:41.168 "traddr": "10.0.0.2", 00:20:41.168 "trsvcid": "4420" 00:20:41.168 }, 00:20:41.168 "peer_address": { 00:20:41.168 "trtype": "TCP", 00:20:41.168 "adrfam": "IPv4", 00:20:41.168 "traddr": "10.0.0.1", 00:20:41.168 "trsvcid": "45440" 00:20:41.168 }, 00:20:41.168 "auth": { 00:20:41.168 "state": "completed", 00:20:41.168 "digest": "sha512", 00:20:41.168 "dhgroup": "ffdhe8192" 00:20:41.168 } 00:20:41.168 } 00:20:41.168 ]' 00:20:41.168 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.168 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.168 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.168 17:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.168 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.168 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.168 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.168 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.431 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:41.431 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:01:YmQzMzFiYTc2ZTk1M2MyMGFmZGVlY2VkNGQwMTdjMWGe+9q6: 00:20:41.999 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.999 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.999 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.999 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.999 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.999 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.999 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:41.999 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.259 17:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.259 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.830 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.830 { 00:20:42.830 "cntlid": 143, 00:20:42.830 "qid": 0, 00:20:42.830 "state": "enabled", 00:20:42.830 "thread": "nvmf_tgt_poll_group_000", 00:20:42.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:42.830 "listen_address": { 00:20:42.830 "trtype": "TCP", 00:20:42.830 "adrfam": "IPv4", 00:20:42.830 "traddr": "10.0.0.2", 00:20:42.830 "trsvcid": "4420" 00:20:42.830 }, 00:20:42.830 "peer_address": { 00:20:42.830 "trtype": "TCP", 00:20:42.830 "adrfam": "IPv4", 00:20:42.830 "traddr": "10.0.0.1", 00:20:42.830 "trsvcid": "41046" 00:20:42.830 }, 00:20:42.830 "auth": { 00:20:42.830 "state": "completed", 00:20:42.830 "digest": "sha512", 00:20:42.830 "dhgroup": "ffdhe8192" 00:20:42.830 } 00:20:42.830 } 00:20:42.830 ]' 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.830 
17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:42.830 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.090 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.090 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.090 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.090 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:43.090 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.031 17:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.031 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.601 00:20:44.601 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.601 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.601 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.601 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.601 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.601 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.601 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.601 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.601 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.601 { 00:20:44.601 "cntlid": 145, 00:20:44.601 "qid": 0, 00:20:44.601 "state": "enabled", 00:20:44.601 "thread": "nvmf_tgt_poll_group_000", 00:20:44.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:44.601 "listen_address": { 00:20:44.601 "trtype": "TCP", 00:20:44.601 "adrfam": "IPv4", 00:20:44.601 "traddr": "10.0.0.2", 00:20:44.601 "trsvcid": "4420" 00:20:44.601 }, 00:20:44.601 "peer_address": { 00:20:44.601 
"trtype": "TCP", 00:20:44.601 "adrfam": "IPv4", 00:20:44.601 "traddr": "10.0.0.1", 00:20:44.601 "trsvcid": "41062" 00:20:44.601 }, 00:20:44.601 "auth": { 00:20:44.601 "state": "completed", 00:20:44.601 "digest": "sha512", 00:20:44.601 "dhgroup": "ffdhe8192" 00:20:44.601 } 00:20:44.601 } 00:20:44.601 ]' 00:20:44.601 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.860 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.860 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.860 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.860 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.860 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.860 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.860 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.119 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:45.119 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDc1ZWQyYjIxMTdhMWU5MzgxZDRkOTQ0MTVkMDkzYWM0ZWJmMzZkMGRiMzE4OThlt7WMNA==: --dhchap-ctrl-secret DHHC-1:03:NTNkYmE3MmQyZjhmNzVlMGRkMjA5ODI1M2NhMGYxOWMyZmIzM2E4ZWI0NzRlYzQwZWU0YjAzYmU5NmI4ZTQ2ZuVJfrE=: 00:20:45.687 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:45.688 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:46.257 request: 00:20:46.257 { 00:20:46.257 "name": "nvme0", 00:20:46.257 "trtype": "tcp", 00:20:46.257 "traddr": "10.0.0.2", 00:20:46.257 "adrfam": "ipv4", 00:20:46.257 "trsvcid": "4420", 00:20:46.257 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:46.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:46.257 "prchk_reftag": false, 00:20:46.257 "prchk_guard": false, 00:20:46.257 "hdgst": false, 00:20:46.257 "ddgst": false, 00:20:46.257 "dhchap_key": "key2", 00:20:46.257 "allow_unrecognized_csi": false, 00:20:46.257 "method": "bdev_nvme_attach_controller", 00:20:46.257 "req_id": 1 00:20:46.257 } 00:20:46.257 Got JSON-RPC error response 00:20:46.257 response: 00:20:46.257 { 00:20:46.257 "code": -5, 00:20:46.257 "message": "Input/output error" 00:20:46.257 } 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.257 17:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:46.257 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:46.258 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.258 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:46.258 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.258 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:46.258 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:46.258 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:46.517 request: 00:20:46.517 { 00:20:46.517 "name": "nvme0", 00:20:46.517 "trtype": "tcp", 00:20:46.517 "traddr": "10.0.0.2", 00:20:46.517 "adrfam": "ipv4", 00:20:46.517 "trsvcid": "4420", 00:20:46.517 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:46.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:46.517 "prchk_reftag": false, 00:20:46.517 "prchk_guard": false, 00:20:46.517 "hdgst": false, 00:20:46.517 "ddgst": false, 00:20:46.518 "dhchap_key": "key1", 00:20:46.518 "dhchap_ctrlr_key": "ckey2", 00:20:46.518 "allow_unrecognized_csi": false, 00:20:46.518 "method": "bdev_nvme_attach_controller", 00:20:46.518 "req_id": 1 00:20:46.518 } 00:20:46.518 Got JSON-RPC error response 00:20:46.518 response: 00:20:46.518 { 00:20:46.518 "code": -5, 00:20:46.518 "message": "Input/output error" 00:20:46.518 } 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:46.518 17:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.518 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.087 request: 00:20:47.087 { 00:20:47.087 "name": "nvme0", 00:20:47.087 "trtype": "tcp", 00:20:47.087 "traddr": "10.0.0.2", 00:20:47.087 "adrfam": "ipv4", 00:20:47.087 "trsvcid": "4420", 00:20:47.087 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:47.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:47.087 "prchk_reftag": false, 00:20:47.087 "prchk_guard": false, 00:20:47.087 "hdgst": false, 00:20:47.087 "ddgst": false, 00:20:47.087 "dhchap_key": "key1", 00:20:47.087 "dhchap_ctrlr_key": "ckey1", 00:20:47.087 "allow_unrecognized_csi": false, 00:20:47.087 "method": "bdev_nvme_attach_controller", 00:20:47.087 "req_id": 1 00:20:47.087 } 00:20:47.087 Got JSON-RPC error response 00:20:47.087 response: 00:20:47.087 { 00:20:47.087 "code": -5, 00:20:47.087 "message": "Input/output error" 00:20:47.087 } 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1657430 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1657430 ']' 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1657430 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1657430 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1657430' 00:20:47.087 killing process with pid 1657430 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1657430 00:20:47.087 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1657430 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1660848 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1660848 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1660848 ']' 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.347 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1660848 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1660848 ']' 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
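From this point the suite stops passing DHHC-1 secrets inline and instead registers them from key files through the target's keyring (the keyring_file_add_key calls below). For reference, such a key file can be produced with nvme-cli and registered like this; a minimal sketch, assuming an nvme-cli recent enough to ship the gen-dhchap-key subcommand, rpc.py on PATH, and an illustrative file name (keyring backends also typically insist on owner-only file permissions):

  # Generate a DHHC-1:01:...: secret (HMAC id 1 = SHA-256) bound to the host NQN
  key=$(nvme gen-dhchap-key --hmac 1 --key-length 32 \
          --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be)
  echo "$key" > /tmp/spdk.key-sha256.example   # illustrative path, not from this run
  chmod 600 /tmp/spdk.key-sha256.example       # restrictive mode, as keyrings expect
  rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.example
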
00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.288 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.288 null0 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5A7 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.hpf ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hpf 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vsG 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.5DM ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5DM 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:48.549 17:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5Ng 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.qr1 ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qr1 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.w2u 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
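auth.sh@174-176 above loads each generated DHHC-1 key file into the target keyring (key0..key3, plus controller keys ckey0..ckey2 where one was generated), @70 grants the host NQN access to the subsystem bound to key3, and @60 attaches an SPDK host-side controller that authenticates with that key; the trace that follows verifies what was actually negotiated by querying the qpair and filtering with jq. Condensed into standalone commands, with NQNs and paths taken verbatim from the trace and rpc.py standing in for the full scripts/rpc.py invocation:

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # load the key file into the target keyring and bind the host entry to it
    rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.w2u
    rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
    # attach from the host-side SPDK app (separate RPC socket), authenticating
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3
    # assert the digest, dhgroup, and auth state negotiated on the new qpair
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The NOT-wrapped bdev_connect calls later in the trace invert this pattern: the host is deliberately misconfigured (restricted to --dhchap-digests sha256 against the sha512 key3, or re-added to the subsystem with no key at all), and the attach is required to fail, which the -5 Input/output error request/response pairs below confirm.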
00:20:48.549 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.119 nvme0n1 00:20:49.380 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.380 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.380 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.380 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.380 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.380 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.380 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.380 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.380 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.380 { 00:20:49.380 "cntlid": 1, 00:20:49.380 "qid": 0, 00:20:49.380 "state": "enabled", 00:20:49.380 "thread": "nvmf_tgt_poll_group_000", 00:20:49.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:49.380 "listen_address": { 00:20:49.380 "trtype": "TCP", 00:20:49.380 "adrfam": "IPv4", 00:20:49.380 "traddr": "10.0.0.2", 00:20:49.380 "trsvcid": "4420" 00:20:49.380 }, 00:20:49.380 "peer_address": { 00:20:49.380 "trtype": "TCP", 00:20:49.380 "adrfam": "IPv4", 00:20:49.380 "traddr": "10.0.0.1", 00:20:49.380 "trsvcid": "41124" 00:20:49.380 }, 00:20:49.380 "auth": { 00:20:49.380 "state": "completed", 00:20:49.380 "digest": "sha512", 00:20:49.380 "dhgroup": "ffdhe8192" 00:20:49.380 } 00:20:49.380 } 00:20:49.380 ]' 00:20:49.380 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.641 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.641 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.641 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.641 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.641 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.641 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.641 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.901 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:49.901 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:50.472 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.733 request: 00:20:50.733 { 00:20:50.733 "name": "nvme0", 00:20:50.733 "trtype": "tcp", 00:20:50.733 "traddr": "10.0.0.2", 00:20:50.733 "adrfam": "ipv4", 00:20:50.733 "trsvcid": "4420", 00:20:50.733 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:50.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:50.733 "prchk_reftag": false, 00:20:50.733 "prchk_guard": false, 00:20:50.733 "hdgst": false, 00:20:50.733 "ddgst": false, 00:20:50.733 "dhchap_key": "key3", 00:20:50.733 "allow_unrecognized_csi": false, 00:20:50.733 "method": "bdev_nvme_attach_controller", 00:20:50.733 "req_id": 1 00:20:50.733 } 00:20:50.733 Got JSON-RPC error response 00:20:50.733 response: 00:20:50.733 { 00:20:50.733 "code": -5, 00:20:50.733 "message": "Input/output error" 00:20:50.733 } 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:50.733 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:50.994 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:50.994 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:50.994 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:50.994 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:50.994 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.994 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:50.994 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.994 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.994 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.994 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.254 request: 00:20:51.254 { 00:20:51.254 "name": "nvme0", 00:20:51.254 "trtype": "tcp", 00:20:51.255 "traddr": "10.0.0.2", 00:20:51.255 "adrfam": "ipv4", 00:20:51.255 "trsvcid": "4420", 00:20:51.255 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:51.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:51.255 "prchk_reftag": false, 00:20:51.255 "prchk_guard": false, 00:20:51.255 "hdgst": false, 00:20:51.255 "ddgst": false, 00:20:51.255 "dhchap_key": "key3", 00:20:51.255 "allow_unrecognized_csi": false, 00:20:51.255 "method": "bdev_nvme_attach_controller", 00:20:51.255 "req_id": 1 00:20:51.255 } 00:20:51.255 Got JSON-RPC error response 00:20:51.255 response: 00:20:51.255 { 00:20:51.255 "code": -5, 00:20:51.255 "message": "Input/output error" 00:20:51.255 } 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.255 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:51.515 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:51.775 request: 00:20:51.775 { 00:20:51.775 "name": "nvme0", 00:20:51.776 "trtype": "tcp", 00:20:51.776 "traddr": "10.0.0.2", 00:20:51.776 "adrfam": "ipv4", 00:20:51.776 "trsvcid": "4420", 00:20:51.776 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:51.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:51.776 "prchk_reftag": false, 00:20:51.776 "prchk_guard": false, 00:20:51.776 "hdgst": false, 00:20:51.776 "ddgst": false, 00:20:51.776 "dhchap_key": "key0", 00:20:51.776 "dhchap_ctrlr_key": "key1", 00:20:51.776 "allow_unrecognized_csi": false, 00:20:51.776 "method": "bdev_nvme_attach_controller", 00:20:51.776 "req_id": 1 00:20:51.776 } 00:20:51.776 Got JSON-RPC error response 00:20:51.776 response: 00:20:51.776 { 00:20:51.776 "code": -5, 00:20:51.776 "message": "Input/output error" 00:20:51.776 } 00:20:51.776 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:51.776 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.776 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:51.776 17:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:51.776 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:51.776 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:51.776 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:52.036 nvme0n1 00:20:52.036 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:52.036 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:52.036 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.036 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.296 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:52.296 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.296 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.296 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.296 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:52.296 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:52.296 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:53.234 nvme0n1 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:53.234 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.494 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.494 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:53.494 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: --dhchap-ctrl-secret DHHC-1:03:OWY5YTczZDAwMWNiMzJlMjgwMDJjMzM4OTA4NTBlZDA2MzZjZDkwZTczZjhlZTM2YTU3MDEyOWNjOTFhM2EwMq4UQ0Q=: 00:20:54.064 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:54.064 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:54.064 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:54.064 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:54.064 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:54.064 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:54.064 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:54.064 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.064 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.324 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:20:54.324 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:54.324 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:54.324 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:54.324 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.324 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:54.324 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.324 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:54.324 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:54.324 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:54.585 request: 00:20:54.585 { 00:20:54.585 "name": "nvme0", 00:20:54.585 "trtype": "tcp", 00:20:54.585 "traddr": "10.0.0.2", 00:20:54.585 "adrfam": "ipv4", 00:20:54.585 "trsvcid": "4420", 00:20:54.585 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:54.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:54.585 "prchk_reftag": false, 00:20:54.585 "prchk_guard": false, 00:20:54.585 "hdgst": false, 00:20:54.585 "ddgst": false, 00:20:54.585 "dhchap_key": "key1", 00:20:54.585 "allow_unrecognized_csi": false, 00:20:54.585 "method": "bdev_nvme_attach_controller", 00:20:54.585 "req_id": 1 00:20:54.585 } 00:20:54.585 Got JSON-RPC error response 00:20:54.585 response: 00:20:54.585 { 00:20:54.585 "code": -5, 00:20:54.585 "message": "Input/output error" 00:20:54.585 } 00:20:54.585 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:54.585 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.585 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.585 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.585 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:54.585 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:54.585 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:55.526 nvme0n1 00:20:55.526 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:55.526 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.526 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:55.526 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.526 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.526 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.787 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.787 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.787 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.787 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.787 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:55.787 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:55.787 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:56.045 nvme0n1 00:20:56.045 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:56.046 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.046 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:56.305 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.305 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.305 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.305 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:56.305 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.305 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: '' 2s 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: ]] 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGVkNjZjMTdiMWMwNWFmMGZmNTk4NGJjM2ZhYzM4ZWOf/l6u: 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:56.564 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:58.472 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:58.472 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:58.472 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:58.472 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:58.472 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: 2s 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: ]] 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MWYwYmM5YTlmZmUwMmE4NWQ4Nzk3NzBkYTA4NDMxNzZhYTQwNzBmOTRkZTMyYjVmCWfGsg==: 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:58.473 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:00.378 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:00.378 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:00.378 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:00.378 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:00.637 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:01.206 nvme0n1 00:21:01.206 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.206 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.206 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.518 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.518 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.518 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.778 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:01.778 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:01.778 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.038 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.038 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:02.038 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.038 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.038 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.038 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:02.038 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:02.038 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:02.038 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:02.038 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:02.296 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:02.863 request: 00:21:02.863 { 00:21:02.863 "name": "nvme0", 00:21:02.863 "dhchap_key": "key1", 00:21:02.863 "dhchap_ctrlr_key": "key3", 00:21:02.863 "method": "bdev_nvme_set_keys", 00:21:02.863 "req_id": 1 00:21:02.863 } 00:21:02.863 Got JSON-RPC error response 00:21:02.863 response: 00:21:02.863 { 00:21:02.863 "code": -13, 00:21:02.863 "message": "Permission denied" 00:21:02.863 } 00:21:02.863 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:02.863 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:02.863 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:02.863 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:02.863 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:02.863 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:02.863 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.863 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:02.863 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:04.258 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:04.258 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:04.258 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.258 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:04.258 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:04.258 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.258 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.258 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.258 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:04.258 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:04.258 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:04.830 nvme0n1 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
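The re-keying phase above runs in both stacks. On the kernel side, nvme connect takes the DHHC-1 secrets directly on the command line (--dhchap-secret/--dhchap-ctrl-secret, traced at auth.sh@36), and nvme_set_keys (auth.sh@49-56) rotates a live controller by echoing a new secret into its node under /sys/devices/virtual/nvme-fabrics/ctl and sleeping 2s for re-authentication; the trace never shows which attribute file the echo lands in, so the dhchap_secret name below is an assumption about the mainline kernel interface, not something confirmed by the log:

    # kernel host: connect with explicit secrets, then re-key through sysfs
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    # attribute name assumed; an empty ckey in the trace means "leave unchanged"
    echo "$newkey" > /sys/devices/virtual/nvme-fabrics/ctl/nvme0/dhchap_secret
    sleep 2   # allow re-authentication to finish before touching the block device

On the SPDK side, nvmf_subsystem_set_keys updates what the target accepts for an existing host entry and bdev_nvme_set_keys re-authenticates the live host controller in place; asking for a key combination the target entry does not permit is refused with -13 (Permission denied) rather than the -5 seen on failed attach attempts, after which the harness polls until the deliberately short --ctrlr-loss-timeout-sec 1 reaps the controller (the jq length / sleep 1s loop above and below). The rotation pair and the wait loop reduce to:

    # rotate both ends of a live connection (key names as in the trace)
    rpc.py nvmf_subsystem_set_keys "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # after a rejected re-key, wait for the 1s ctrlr-loss timeout to drop it
    while (( $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) )); do
        sleep 1
    done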
00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:04.830 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:05.400 request: 00:21:05.400 { 00:21:05.400 "name": "nvme0", 00:21:05.400 "dhchap_key": "key2", 00:21:05.400 "dhchap_ctrlr_key": "key0", 00:21:05.400 "method": "bdev_nvme_set_keys", 00:21:05.400 "req_id": 1 00:21:05.400 } 00:21:05.400 Got JSON-RPC error response 00:21:05.400 response: 00:21:05.400 { 00:21:05.400 "code": -13, 00:21:05.400 "message": "Permission denied" 00:21:05.400 } 00:21:05.400 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:05.400 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.400 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.400 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.400 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:05.400 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:05.400 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.661 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:05.661 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:06.602 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:06.602 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:06.602 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1657460 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1657460 ']' 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1657460 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:06.862 
17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1657460 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1657460' 00:21:06.862 killing process with pid 1657460 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1657460 00:21:06.862 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1657460 00:21:07.123 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:07.123 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:07.123 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:07.123 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.123 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:07.123 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.123 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.123 rmmod nvme_tcp 00:21:07.123 rmmod nvme_fabrics 00:21:07.123 rmmod nvme_keyring 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1660848 ']' 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1660848 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1660848 ']' 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1660848 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1660848 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1660848' 00:21:07.123 killing process with pid 1660848 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1660848 00:21:07.123 17:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1660848 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.123 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.384 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.384 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:07.384 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.385 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:07.385 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.385 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.385 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.385 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.385 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.385 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.296 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:09.296 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.5A7 /tmp/spdk.key-sha256.vsG /tmp/spdk.key-sha384.5Ng /tmp/spdk.key-sha512.w2u /tmp/spdk.key-sha512.hpf /tmp/spdk.key-sha384.5DM /tmp/spdk.key-sha256.qr1 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:09.296 00:21:09.296 real 2m36.980s 00:21:09.296 user 5m54.008s 00:21:09.296 sys 0m24.517s 00:21:09.296 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.296 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.296 ************************************ 00:21:09.296 END TEST nvmf_auth_target 00:21:09.296 ************************************ 00:21:09.296 17:37:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:09.296 17:37:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:09.296 17:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:09.296 17:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.296 17:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:09.296 ************************************ 00:21:09.296 START TEST nvmf_bdevio_no_huge 00:21:09.296 ************************************ 00:21:09.296 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:09.557 * Looking for test storage... 
00:21:09.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:09.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.557 --rc genhtml_branch_coverage=1 00:21:09.557 --rc genhtml_function_coverage=1 00:21:09.557 --rc genhtml_legend=1 00:21:09.557 --rc geninfo_all_blocks=1 00:21:09.557 --rc geninfo_unexecuted_blocks=1 00:21:09.557 00:21:09.557 ' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:09.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.557 --rc genhtml_branch_coverage=1 00:21:09.557 --rc genhtml_function_coverage=1 00:21:09.557 --rc genhtml_legend=1 00:21:09.557 --rc geninfo_all_blocks=1 00:21:09.557 --rc geninfo_unexecuted_blocks=1 00:21:09.557 00:21:09.557 ' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:09.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.557 --rc genhtml_branch_coverage=1 00:21:09.557 --rc genhtml_function_coverage=1 00:21:09.557 --rc genhtml_legend=1 00:21:09.557 --rc geninfo_all_blocks=1 00:21:09.557 --rc geninfo_unexecuted_blocks=1 00:21:09.557 00:21:09.557 ' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:09.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.557 --rc genhtml_branch_coverage=1 00:21:09.557 --rc genhtml_function_coverage=1 00:21:09.557 --rc genhtml_legend=1 00:21:09.557 --rc geninfo_all_blocks=1 00:21:09.557 --rc geninfo_unexecuted_blocks=1 00:21:09.557 00:21:09.557 ' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:09.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:09.557 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:17.696 
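The trace around this point shows nvmf/common.sh bucketing the host's NICs by PCI vendor:device ID before choosing test interfaces. A minimal sketch of that classification, assuming "lspci -Dnmm" output for illustration (the script itself walks a /sys/bus/pci device cache, so the parsing below is not its exact mechanism):

    # Bucket NICs the way the harness does: Intel E810 (0x1592/0x159b),
    # Intel X722 (0x37d2), and a set of Mellanox ConnectX device IDs.
    e810=() x722=() mlx=()
    while read -r addr _class vendor device _; do
        case "${vendor}:${device}" in
            8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810 family
            8086:37d2)           x722+=("$addr") ;;  # Intel X722
            15b3:*)              mlx+=("$addr")  ;;  # Mellanox ConnectX
        esac
    done < <(lspci -Dnmm | tr -d '"')
    pci_devs=("${e810[@]}")  # SPDK_TEST_NVMF_NICS=e810 picks the E810 bucket

With SPDK_TEST_NVMF_NICS=e810, the two E810 ports found in this run (0000:4b:00.0 and 0000:4b:00.1) supply the cvl_0_0/cvl_0_1 net devices used as target and initiator interfaces below.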
17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.696 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:17.696 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:17.697 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:17.697 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:17.697 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:17.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:21:17.697 00:21:17.697 --- 10.0.0.2 ping statistics --- 00:21:17.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.697 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:17.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:21:17.697 00:21:17.697 --- 10.0.0.1 ping statistics --- 00:21:17.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.697 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.697 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:17.698 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1663675 00:21:17.698 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1663675 00:21:17.698 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:17.698 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1663675 ']' 00:21:17.698 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.698 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.698 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.698 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.698 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:17.698 [2024-12-06 17:37:09.050001] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:21:17.698 [2024-12-06 17:37:09.050073] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:17.698 [2024-12-06 17:37:09.154503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.698 [2024-12-06 17:37:09.214974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.698 [2024-12-06 17:37:09.215021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.698 [2024-12-06 17:37:09.215030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.698 [2024-12-06 17:37:09.215037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.698 [2024-12-06 17:37:09.215043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
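The nvmfappstart step above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with hugepages disabled (-s 1024 MB of regular memory) and then blocks in waitforlisten until the app's RPC socket answers. A rough start-and-wait equivalent, with the retry count and polling interval as assumptions rather than common.sh's exact values:

    # Start the target in the test namespace, then poll its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds once the app is up and listening.
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done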
00:21:17.698 [2024-12-06 17:37:09.216525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:17.698 [2024-12-06 17:37:09.216758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:17.698 [2024-12-06 17:37:09.217045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:17.698 [2024-12-06 17:37:09.217142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:17.960 [2024-12-06 17:37:09.919020] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:17.960 Malloc0 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:17.960 [2024-12-06 17:37:09.972824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.960 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.960 { 00:21:17.960 "params": { 00:21:17.961 "name": "Nvme$subsystem", 00:21:17.961 "trtype": "$TEST_TRANSPORT", 00:21:17.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.961 "adrfam": "ipv4", 00:21:17.961 "trsvcid": "$NVMF_PORT", 00:21:17.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.961 "hdgst": ${hdgst:-false}, 00:21:17.961 "ddgst": ${ddgst:-false} 00:21:17.961 }, 00:21:17.961 "method": "bdev_nvme_attach_controller" 00:21:17.961 } 00:21:17.961 EOF 00:21:17.961 )") 00:21:17.961 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:17.961 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:21:17.961 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:17.961 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:17.961 "params": { 00:21:17.961 "name": "Nvme1", 00:21:17.961 "trtype": "tcp", 00:21:17.961 "traddr": "10.0.0.2", 00:21:17.961 "adrfam": "ipv4", 00:21:17.961 "trsvcid": "4420", 00:21:17.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.961 "hdgst": false, 00:21:17.961 "ddgst": false 00:21:17.961 }, 00:21:17.961 "method": "bdev_nvme_attach_controller" 00:21:17.961 }' 00:21:18.223 [2024-12-06 17:37:10.033520] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:21:18.223 [2024-12-06 17:37:10.033591] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1663712 ] 00:21:18.223 [2024-12-06 17:37:10.132695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:18.223 [2024-12-06 17:37:10.195898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.223 [2024-12-06 17:37:10.196067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.223 [2024-12-06 17:37:10.196067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.486 I/O targets: 00:21:18.486 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:18.486 00:21:18.486 00:21:18.486 CUnit - A unit testing framework for C - Version 2.1-3 00:21:18.486 http://cunit.sourceforge.net/ 00:21:18.486 00:21:18.486 00:21:18.486 Suite: bdevio tests on: Nvme1n1 00:21:18.748 Test: blockdev write read block ...passed 00:21:18.748 Test: blockdev write zeroes read block ...passed 00:21:18.748 Test: blockdev write zeroes read no split ...passed 00:21:18.748 Test: blockdev write zeroes read split ...passed 00:21:18.748 Test: blockdev write zeroes read split partial ...passed 00:21:18.748 Test: blockdev reset ...[2024-12-06 17:37:10.647392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:18.748 [2024-12-06 17:37:10.647502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183b430 (9): Bad file descriptor 00:21:18.748 [2024-12-06 17:37:10.752669] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:21:18.748 passed 00:21:18.748 Test: blockdev write read 8 blocks ...passed 00:21:18.748 Test: blockdev write read size > 128k ...passed 00:21:18.748 Test: blockdev write read invalid size ...passed 00:21:19.009 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:19.009 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:19.009 Test: blockdev write read max offset ...passed 00:21:19.009 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:19.009 Test: blockdev writev readv 8 blocks ...passed 00:21:19.009 Test: blockdev writev readv 30 x 1block ...passed 00:21:19.009 Test: blockdev writev readv block ...passed 00:21:19.009 Test: blockdev writev readv size > 128k ...passed 00:21:19.009 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:19.009 Test: blockdev comparev and writev ...[2024-12-06 17:37:11.013485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.009 [2024-12-06 17:37:11.013532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.009 [2024-12-06 17:37:11.013549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.009 [2024-12-06 17:37:11.013558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.009 [2024-12-06 17:37:11.013856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.009 [2024-12-06 17:37:11.013871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:19.009 [2024-12-06 17:37:11.013886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.009 [2024-12-06 17:37:11.013896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:19.009 [2024-12-06 17:37:11.014307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.009 [2024-12-06 17:37:11.014322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:19.009 [2024-12-06 17:37:11.014336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.009 [2024-12-06 17:37:11.014347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:19.009 [2024-12-06 17:37:11.014738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.009 [2024-12-06 17:37:11.014752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:19.009 [2024-12-06 17:37:11.014766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.009 [2024-12-06 17:37:11.014775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:19.009 passed 00:21:19.270 Test: blockdev nvme passthru rw ...passed 00:21:19.270 Test: blockdev nvme passthru vendor specific ...[2024-12-06 17:37:11.099219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.270 [2024-12-06 17:37:11.099238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:19.270 [2024-12-06 17:37:11.099451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.270 [2024-12-06 17:37:11.099475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:19.270 [2024-12-06 17:37:11.099700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.270 [2024-12-06 17:37:11.099712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:19.270 [2024-12-06 17:37:11.099929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.270 [2024-12-06 17:37:11.099941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:19.270 passed 00:21:19.270 Test: blockdev nvme admin passthru ...passed 00:21:19.270 Test: blockdev copy ...passed 00:21:19.270 00:21:19.270 Run Summary: Type Total Ran Passed Failed Inactive 00:21:19.270 suites 1 1 n/a 0 0 00:21:19.270 tests 23 23 23 0 0 00:21:19.270 asserts 152 152 152 0 n/a 00:21:19.270 00:21:19.270 Elapsed time = 1.328 seconds 00:21:19.531 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.531 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.531 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:19.531 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.531 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:19.531 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:19.531 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:19.532 rmmod nvme_tcp 00:21:19.532 rmmod nvme_fabrics 00:21:19.532 rmmod nvme_keyring 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1663675 ']' 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1663675 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1663675 ']' 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1663675 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.532 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1663675 00:21:19.792 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:19.792 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:19.792 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1663675' 00:21:19.792 killing process with pid 1663675 00:21:19.792 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1663675 00:21:19.792 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1663675 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.053 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.054 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:22.054 00:21:22.054 real 0m12.607s 00:21:22.054 user 0m15.233s 00:21:22.054 sys 0m6.673s 00:21:22.054 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.054 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:21:22.054 ************************************ 00:21:22.054 END TEST nvmf_bdevio_no_huge 00:21:22.054 ************************************ 00:21:22.054 17:37:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:22.054 17:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:22.054 17:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.054 17:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:22.054 ************************************ 00:21:22.054 START TEST nvmf_tls 00:21:22.054 ************************************ 00:21:22.054 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:22.320 * Looking for test storage... 00:21:22.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.320 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:22.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.321 --rc genhtml_branch_coverage=1 00:21:22.321 --rc genhtml_function_coverage=1 00:21:22.321 --rc genhtml_legend=1 00:21:22.321 --rc geninfo_all_blocks=1 00:21:22.321 --rc geninfo_unexecuted_blocks=1 00:21:22.321 00:21:22.321 ' 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:22.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.321 --rc genhtml_branch_coverage=1 00:21:22.321 --rc genhtml_function_coverage=1 00:21:22.321 --rc genhtml_legend=1 00:21:22.321 --rc geninfo_all_blocks=1 00:21:22.321 --rc geninfo_unexecuted_blocks=1 00:21:22.321 00:21:22.321 ' 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:22.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.321 --rc genhtml_branch_coverage=1 00:21:22.321 --rc genhtml_function_coverage=1 00:21:22.321 --rc genhtml_legend=1 00:21:22.321 --rc geninfo_all_blocks=1 00:21:22.321 --rc geninfo_unexecuted_blocks=1 00:21:22.321 00:21:22.321 ' 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:22.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.321 --rc genhtml_branch_coverage=1 00:21:22.321 --rc genhtml_function_coverage=1 00:21:22.321 --rc genhtml_legend=1 00:21:22.321 --rc geninfo_all_blocks=1 00:21:22.321 --rc geninfo_unexecuted_blocks=1 00:21:22.321 00:21:22.321 ' 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
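The `lt 1.15 2` step above is scripts/common.sh comparing two dotted version strings: cmp_versions splits both operands on '.', '-' and ':' into the ver1/ver2 arrays and walks the fields left to right, deciding at the first field that differs; the outcome selects the lcov coverage options exported just after. A minimal standalone sketch of the same idiom (ver_lt is an illustrative name, not the SPDK helper):

ver_lt() {                                        # 0 (true) if $1 sorts strictly before $2
    local -a a b
    local i
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0   # e.g. 1 < 2 settles ver_lt 1.15 2
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                                      # all fields equal: not strictly less
}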
00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:21:22.321 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:30.486 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:30.486 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:30.486 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.486 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:30.487 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:30.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:21:30.487 00:21:30.487 --- 10.0.0.2 ping statistics --- 00:21:30.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.487 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:30.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:21:30.487 00:21:30.487 --- 10.0.0.1 ping statistics --- 00:21:30.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.487 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1666181 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1666181 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1666181 ']' 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.487 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.487 [2024-12-06 17:37:21.769907] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
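At this point the network fixture is complete: cvl_0_0 sits inside the cvl_0_0_ns_spdk namespace as the target-facing port (10.0.0.2), cvl_0_1 remains in the root namespace as the initiator side (10.0.0.1), port 4420 has been opened in iptables, and both directions answer ping. nvmf_tgt is launched inside the namespace with --wait-for-rpc, which holds SPDK subsystem initialization until a framework_start_init RPC arrives, leaving a window in which the ssl socket implementation and its TLS options can still be changed (which is exactly what the tls.sh steps below do). A rough veth-based equivalent of the same topology for a machine without the two e810 ports (the interface and namespace names here are illustrative, not from this run):

ip netns add nvmf_tgt_ns
ip link add veth_ini type veth peer name veth_tgt    # stand-ins for cvl_0_1/cvl_0_0
ip link set veth_tgt netns nvmf_tgt_ns
ip addr add 10.0.0.1/24 dev veth_ini
ip link set veth_ini up
ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip netns exec nvmf_tgt_ns ip link set veth_tgt up
ip netns exec nvmf_tgt_ns ip link set lo up
ping -c 1 10.0.0.2                                   # initiator -> target reachability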
00:21:30.487 [2024-12-06 17:37:21.769972] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.487 [2024-12-06 17:37:21.870338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.487 [2024-12-06 17:37:21.920499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.487 [2024-12-06 17:37:21.920552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.487 [2024-12-06 17:37:21.920561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.487 [2024-12-06 17:37:21.920576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.487 [2024-12-06 17:37:21.920582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.487 [2024-12-06 17:37:21.921328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.748 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.748 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:30.748 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.748 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.748 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.748 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.748 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:30.748 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:30.748 true 00:21:30.748 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:30.748 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:31.009 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:31.009 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:31.009 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:31.270 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:31.270 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:31.529 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:31.529 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:31.529 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:31.529 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:31.529 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:31.790 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:31.790 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:31.790 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:31.790 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:32.050 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:32.050 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:32.050 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:32.050 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:32.050 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:32.310 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:32.310 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:32.310 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:32.570 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.zqgP2MhczP 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.hsaaxWXvV6 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.zqgP2MhczP 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.hsaaxWXvV6 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:32.830 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:33.089 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.zqgP2MhczP 00:21:33.089 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zqgP2MhczP 00:21:33.089 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:33.348 [2024-12-06 17:37:25.219994] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.348 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:33.348 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:33.606 [2024-12-06 17:37:25.556813] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.606 [2024-12-06 17:37:25.557014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.606 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:33.864 malloc0 00:21:33.864 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:33.864 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zqgP2MhczP 00:21:34.123 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:34.382 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.zqgP2MhczP 00:21:44.373 Initializing NVMe Controllers 00:21:44.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:44.373 Initialization complete. Launching workers. 00:21:44.373 ======================================================== 00:21:44.373 Latency(us) 00:21:44.373 Device Information : IOPS MiB/s Average min max 00:21:44.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18710.65 73.09 3420.71 1226.14 5367.39 00:21:44.373 ======================================================== 00:21:44.373 Total : 18710.65 73.09 3420.71 1226.14 5367.39 00:21:44.373 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zqgP2MhczP 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zqgP2MhczP 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1666424 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1666424 /var/tmp/bdevperf.sock 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1666424 ']' 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.373 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.374 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:44.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.374 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.374 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.374 [2024-12-06 17:37:36.435268] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:21:44.374 [2024-12-06 17:37:36.435323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666424 ] 00:21:44.633 [2024-12-06 17:37:36.524499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.633 [2024-12-06 17:37:36.559582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.204 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.204 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:45.204 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqgP2MhczP 00:21:45.464 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:45.464 [2024-12-06 17:37:37.528022] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.724 TLSTESTn1 00:21:45.724 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:45.724 Running I/O for 10 seconds... 
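While the 10-second verify run proceeds, the configuration this first, expected-to-pass TLS case relies on is worth condensing. Every call below appears verbatim in the trace above; rpc.py is scripts/rpc.py, and $KEY stands for the interchange-format PSK file (/tmp/tmp.zqgP2MhczP here, written out and chmod 0600 earlier):

rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 "$KEY"
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

On the initiator side, bdevperf registers the same file under its own keyring (keyring_file_add_key over /var/tmp/bdevperf.sock) and attaches with bdev_nvme_attach_controller ... --psk key0, so both ends derive the handshake PSK from identical key material and the connection succeeds.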
00:21:48.045 6095.00 IOPS, 23.81 MiB/s [2024-12-06T16:37:41.050Z] 5845.50 IOPS, 22.83 MiB/s [2024-12-06T16:37:41.990Z] 5557.67 IOPS, 21.71 MiB/s [2024-12-06T16:37:42.929Z] 5427.25 IOPS, 21.20 MiB/s [2024-12-06T16:37:43.920Z] 5465.40 IOPS, 21.35 MiB/s [2024-12-06T16:37:44.862Z] 5533.67 IOPS, 21.62 MiB/s [2024-12-06T16:37:45.801Z] 5575.86 IOPS, 21.78 MiB/s [2024-12-06T16:37:46.742Z] 5659.12 IOPS, 22.11 MiB/s [2024-12-06T16:37:48.128Z] 5681.11 IOPS, 22.19 MiB/s [2024-12-06T16:37:48.128Z] 5739.10 IOPS, 22.42 MiB/s 00:21:56.062 Latency(us) 00:21:56.062 [2024-12-06T16:37:48.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.062 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:56.062 Verification LBA range: start 0x0 length 0x2000 00:21:56.062 TLSTESTn1 : 10.01 5744.34 22.44 0.00 0.00 22249.31 5324.80 29928.11 00:21:56.062 [2024-12-06T16:37:48.128Z] =================================================================================================================== 00:21:56.062 [2024-12-06T16:37:48.128Z] Total : 5744.34 22.44 0.00 0.00 22249.31 5324.80 29928.11 00:21:56.062 { 00:21:56.062 "results": [ 00:21:56.062 { 00:21:56.062 "job": "TLSTESTn1", 00:21:56.062 "core_mask": "0x4", 00:21:56.062 "workload": "verify", 00:21:56.062 "status": "finished", 00:21:56.062 "verify_range": { 00:21:56.062 "start": 0, 00:21:56.062 "length": 8192 00:21:56.062 }, 00:21:56.062 "queue_depth": 128, 00:21:56.062 "io_size": 4096, 00:21:56.062 "runtime": 10.012809, 00:21:56.062 "iops": 5744.342072239669, 00:21:56.062 "mibps": 22.438836219686205, 00:21:56.062 "io_failed": 0, 00:21:56.062 "io_timeout": 0, 00:21:56.062 "avg_latency_us": 22249.306499295864, 00:21:56.062 "min_latency_us": 5324.8, 00:21:56.062 "max_latency_us": 29928.106666666667 00:21:56.062 } 00:21:56.062 ], 00:21:56.062 "core_count": 1 00:21:56.062 } 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1666424 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1666424 ']' 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1666424 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666424 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666424' 00:21:56.062 killing process with pid 1666424 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1666424 00:21:56.062 Received shutdown signal, test time was about 10.000000 seconds 00:21:56.062 00:21:56.062 Latency(us) 00:21:56.062 [2024-12-06T16:37:48.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.062 [2024-12-06T16:37:48.128Z] 
=================================================================================================================== 00:21:56.062 [2024-12-06T16:37:48.128Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1666424 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hsaaxWXvV6 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hsaaxWXvV6 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hsaaxWXvV6 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hsaaxWXvV6 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1666571 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1666571 /var/tmp/bdevperf.sock 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1666571 ']' 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.062 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:56.063 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.063 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.063 [2024-12-06 17:37:47.988030] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:21:56.063 [2024-12-06 17:37:47.988088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666571 ] 00:21:56.063 [2024-12-06 17:37:48.068952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.063 [2024-12-06 17:37:48.097744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.003 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.003 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:57.003 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hsaaxWXvV6 00:21:57.004 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:57.264 [2024-12-06 17:37:49.109575] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.264 [2024-12-06 17:37:49.120932] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:57.264 [2024-12-06 17:37:49.121701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e5800 (107): Transport endpoint is not connected 00:21:57.265 [2024-12-06 17:37:49.122697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e5800 (9): Bad file descriptor 00:21:57.265 [2024-12-06 17:37:49.123699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:57.265 [2024-12-06 17:37:49.123706] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:57.265 [2024-12-06 17:37:49.123712] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:57.265 [2024-12-06 17:37:49.123720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
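This is the first deliberate failure case: on the bdevperf side, key0 now points at the second key file (/tmp/tmp.hsaaxWXvV6), whose secret does not match the PSK the target holds for host1. The TLS handshake is therefore rejected and the socket torn down (hence errno 107, Transport endpoint is not connected, followed by the Bad file descriptor poll), and the attach is reported as a -5 Input/output error in the JSON-RPC exchange below. The surrounding NOT wrapper inverts the exit status so the test passes only if the attach fails; a bare-bones sketch of the same assertion (only the FAIL message is invented, the rpc call is the one from the trace):

if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo 'FAIL: controller attach succeeded with a mismatched PSK' >&2
    exit 1
fi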
00:21:57.265 request: 00:21:57.265 { 00:21:57.265 "name": "TLSTEST", 00:21:57.265 "trtype": "tcp", 00:21:57.265 "traddr": "10.0.0.2", 00:21:57.265 "adrfam": "ipv4", 00:21:57.265 "trsvcid": "4420", 00:21:57.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.265 "prchk_reftag": false, 00:21:57.265 "prchk_guard": false, 00:21:57.265 "hdgst": false, 00:21:57.265 "ddgst": false, 00:21:57.265 "psk": "key0", 00:21:57.265 "allow_unrecognized_csi": false, 00:21:57.265 "method": "bdev_nvme_attach_controller", 00:21:57.265 "req_id": 1 00:21:57.265 } 00:21:57.265 Got JSON-RPC error response 00:21:57.265 response: 00:21:57.265 { 00:21:57.265 "code": -5, 00:21:57.265 "message": "Input/output error" 00:21:57.265 } 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1666571 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1666571 ']' 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1666571 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666571 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666571' 00:21:57.265 killing process with pid 1666571 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1666571 00:21:57.265 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.265 00:21:57.265 Latency(us) 00:21:57.265 [2024-12-06T16:37:49.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.265 [2024-12-06T16:37:49.331Z] =================================================================================================================== 00:21:57.265 [2024-12-06T16:37:49.331Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1666571 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zqgP2MhczP 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.zqgP2MhczP 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zqgP2MhczP 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zqgP2MhczP 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1666599 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1666599 /var/tmp/bdevperf.sock 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1666599 ']' 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.265 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.525 [2024-12-06 17:37:49.367019] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:21:57.525 [2024-12-06 17:37:49.367074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666599 ] 00:21:57.525 [2024-12-06 17:37:49.449789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.525 [2024-12-06 17:37:49.477333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.465 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.465 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:58.465 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqgP2MhczP 00:21:58.465 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:58.465 [2024-12-06 17:37:50.504856] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.465 [2024-12-06 17:37:50.514178] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:58.465 [2024-12-06 17:37:50.514197] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:58.465 [2024-12-06 17:37:50.514216] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:58.465 [2024-12-06 17:37:50.515068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163e800 (107): Transport endpoint is not connected 00:21:58.465 [2024-12-06 17:37:50.516064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163e800 (9): Bad file descriptor 00:21:58.465 [2024-12-06 17:37:50.517066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:58.465 [2024-12-06 17:37:50.517074] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:58.465 [2024-12-06 17:37:50.517080] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:58.465 [2024-12-06 17:37:50.517088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
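[annotation] The attach failure above is the expected negative result for the tls.sh@150 case: the TLS PSK identity presented for host2 ("NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1") has no matching key on the target, since the key was registered for the host1/cnode1 pairing. A minimal sketch of the two initiator-side RPCs being exercised, copied from the xtrace above (rpc.py path abbreviated; socket and key file are the test's own temporaries):

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqgP2MhczP
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host2 --psk key0   # expected to fail with -5 (Input/output error)

The JSON-RPC request/response dump that follows is that failing bdev_nvme_attach_controller call.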
00:21:58.465 request: 00:21:58.465 { 00:21:58.465 "name": "TLSTEST", 00:21:58.465 "trtype": "tcp", 00:21:58.465 "traddr": "10.0.0.2", 00:21:58.465 "adrfam": "ipv4", 00:21:58.465 "trsvcid": "4420", 00:21:58.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.465 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:58.465 "prchk_reftag": false, 00:21:58.465 "prchk_guard": false, 00:21:58.465 "hdgst": false, 00:21:58.465 "ddgst": false, 00:21:58.465 "psk": "key0", 00:21:58.465 "allow_unrecognized_csi": false, 00:21:58.465 "method": "bdev_nvme_attach_controller", 00:21:58.465 "req_id": 1 00:21:58.465 } 00:21:58.465 Got JSON-RPC error response 00:21:58.465 response: 00:21:58.465 { 00:21:58.465 "code": -5, 00:21:58.465 "message": "Input/output error" 00:21:58.465 } 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1666599 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1666599 ']' 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1666599 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666599 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666599' 00:21:58.726 killing process with pid 1666599 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1666599 00:21:58.726 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.726 00:21:58.726 Latency(us) 00:21:58.726 [2024-12-06T16:37:50.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.726 [2024-12-06T16:37:50.792Z] =================================================================================================================== 00:21:58.726 [2024-12-06T16:37:50.792Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1666599 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zqgP2MhczP 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.zqgP2MhczP 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zqgP2MhczP 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zqgP2MhczP 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1666631 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1666631 /var/tmp/bdevperf.sock 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1666631 ']' 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.726 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.726 [2024-12-06 17:37:50.758062] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:21:58.726 [2024-12-06 17:37:50.758118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666631 ] 00:21:58.986 [2024-12-06 17:37:50.843742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.987 [2024-12-06 17:37:50.871767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.557 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.557 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:59.557 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zqgP2MhczP 00:21:59.816 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:00.076 [2024-12-06 17:37:51.883361] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.076 [2024-12-06 17:37:51.887825] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:00.076 [2024-12-06 17:37:51.887843] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:00.076 [2024-12-06 17:37:51.887862] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:00.076 [2024-12-06 17:37:51.888499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb4800 (107): Transport endpoint is not connected 00:22:00.076 [2024-12-06 17:37:51.889494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb4800 (9): Bad file descriptor 00:22:00.076 [2024-12-06 17:37:51.890496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:00.076 [2024-12-06 17:37:51.890503] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:00.076 [2024-12-06 17:37:51.890509] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:00.076 [2024-12-06 17:37:51.890517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
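[annotation] Counterpart negative case (tls.sh@153): here the host NQN matches, but the subsystem NQN is part of the PSK identity as well, so attaching to cnode2 with a key registered for cnode1 fails the same way. Sketch of the attach as issued above (rpc.py path abbreviated):

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
    # identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" not found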
00:22:00.076 request: 00:22:00.076 { 00:22:00.076 "name": "TLSTEST", 00:22:00.076 "trtype": "tcp", 00:22:00.076 "traddr": "10.0.0.2", 00:22:00.076 "adrfam": "ipv4", 00:22:00.076 "trsvcid": "4420", 00:22:00.076 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:00.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.076 "prchk_reftag": false, 00:22:00.076 "prchk_guard": false, 00:22:00.076 "hdgst": false, 00:22:00.076 "ddgst": false, 00:22:00.076 "psk": "key0", 00:22:00.076 "allow_unrecognized_csi": false, 00:22:00.077 "method": "bdev_nvme_attach_controller", 00:22:00.077 "req_id": 1 00:22:00.077 } 00:22:00.077 Got JSON-RPC error response 00:22:00.077 response: 00:22:00.077 { 00:22:00.077 "code": -5, 00:22:00.077 "message": "Input/output error" 00:22:00.077 } 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1666631 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1666631 ']' 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1666631 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666631 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666631' 00:22:00.077 killing process with pid 1666631 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1666631 00:22:00.077 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.077 00:22:00.077 Latency(us) 00:22:00.077 [2024-12-06T16:37:52.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.077 [2024-12-06T16:37:52.143Z] =================================================================================================================== 00:22:00.077 [2024-12-06T16:37:52.143Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:00.077 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1666631 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:00.077 
17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1666659 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1666659 /var/tmp/bdevperf.sock 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1666659 ']' 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.077 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.077 [2024-12-06 17:37:52.132400] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:22:00.077 [2024-12-06 17:37:52.132455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666659 ] 00:22:00.336 [2024-12-06 17:37:52.214511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.336 [2024-12-06 17:37:52.241612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.906 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.906 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:00.906 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:01.167 [2024-12-06 17:37:53.084588] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:01.167 [2024-12-06 17:37:53.084615] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:01.167 request: 00:22:01.167 { 00:22:01.167 "name": "key0", 00:22:01.167 "path": "", 00:22:01.167 "method": "keyring_file_add_key", 00:22:01.167 "req_id": 1 00:22:01.167 } 00:22:01.167 Got JSON-RPC error response 00:22:01.167 response: 00:22:01.167 { 00:22:01.167 "code": -1, 00:22:01.167 "message": "Operation not permitted" 00:22:01.167 } 00:22:01.167 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:01.428 [2024-12-06 17:37:53.265118] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.428 [2024-12-06 17:37:53.265138] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:01.428 request: 00:22:01.428 { 00:22:01.428 "name": "TLSTEST", 00:22:01.428 "trtype": "tcp", 00:22:01.428 "traddr": "10.0.0.2", 00:22:01.428 "adrfam": "ipv4", 00:22:01.428 "trsvcid": "4420", 00:22:01.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.428 "prchk_reftag": false, 00:22:01.428 "prchk_guard": false, 00:22:01.428 "hdgst": false, 00:22:01.428 "ddgst": false, 00:22:01.428 "psk": "key0", 00:22:01.428 "allow_unrecognized_csi": false, 00:22:01.428 "method": "bdev_nvme_attach_controller", 00:22:01.428 "req_id": 1 00:22:01.428 } 00:22:01.428 Got JSON-RPC error response 00:22:01.428 response: 00:22:01.428 { 00:22:01.428 "code": -126, 00:22:01.428 "message": "Required key not available" 00:22:01.428 } 00:22:01.428 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1666659 00:22:01.428 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1666659 ']' 00:22:01.428 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1666659 00:22:01.428 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:01.428 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.428 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1666659 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666659' 00:22:01.429 killing process with pid 1666659 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1666659 00:22:01.429 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.429 00:22:01.429 Latency(us) 00:22:01.429 [2024-12-06T16:37:53.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.429 [2024-12-06T16:37:53.495Z] =================================================================================================================== 00:22:01.429 [2024-12-06T16:37:53.495Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1666659 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1666181 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1666181 ']' 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1666181 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.429 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666181 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666181' 00:22:01.690 killing process with pid 1666181 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1666181 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1666181 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:01.690 17:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.SI9CzZBvy3 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.SI9CzZBvy3 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1666695 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1666695 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1666695 ']' 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.690 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.690 [2024-12-06 17:37:53.740552] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:22:01.690 [2024-12-06 17:37:53.740607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.950 [2024-12-06 17:37:53.832839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.950 [2024-12-06 17:37:53.864448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.950 [2024-12-06 17:37:53.864481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
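[annotation] The key_long string printed above is assembled by format_interchange_psk/format_key in nvmf/common.sh through the inline "python -" call shown in the xtrace. A self-contained sketch of that encoding, assuming the NVMe/TCP PSK interchange layout (configured PSK bytes followed by a little-endian CRC-32 trailer, base64-encoded; the "02" field taken here to select SHA-384). Note that the 48 hex characters are used as raw ASCII bytes, which matches the base64 payload in the log:

python - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"  # ASCII bytes, as logged
crc = zlib.crc32(key).to_bytes(4, "little")                 # assumed CRC-32 LE trailer
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
EOF

Decoding the logged base64 back yields those 48 key bytes plus a 4-byte trailer, consistent with this layout.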
00:22:01.950 [2024-12-06 17:37:53.864486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.950 [2024-12-06 17:37:53.864491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.950 [2024-12-06 17:37:53.864495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.950 [2024-12-06 17:37:53.864947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.520 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.520 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:02.520 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.520 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.520 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.520 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.520 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.SI9CzZBvy3 00:22:02.520 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SI9CzZBvy3 00:22:02.520 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:02.783 [2024-12-06 17:37:54.735133] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.783 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:03.045 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:03.045 [2024-12-06 17:37:55.071962] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.045 [2024-12-06 17:37:55.072159] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.045 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:03.305 malloc0 00:22:03.305 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:03.576 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3 00:22:03.576 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SI9CzZBvy3 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SI9CzZBvy3 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1666748 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1666748 /var/tmp/bdevperf.sock 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1666748 ']' 00:22:03.836 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.837 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.837 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.837 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.837 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.837 [2024-12-06 17:37:55.803090] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:22:03.837 [2024-12-06 17:37:55.803145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666748 ] 00:22:03.837 [2024-12-06 17:37:55.886686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.097 [2024-12-06 17:37:55.915739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.669 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.669 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:04.669 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3 00:22:04.929 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:04.929 [2024-12-06 17:37:56.887121] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.929 TLSTESTn1 00:22:04.929 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:05.189 Running I/O for 10 seconds... 00:22:07.071 5063.00 IOPS, 19.78 MiB/s [2024-12-06T16:38:00.521Z] 5649.00 IOPS, 22.07 MiB/s [2024-12-06T16:38:01.101Z] 5466.33 IOPS, 21.35 MiB/s [2024-12-06T16:38:02.480Z] 5619.50 IOPS, 21.95 MiB/s [2024-12-06T16:38:03.418Z] 5808.00 IOPS, 22.69 MiB/s [2024-12-06T16:38:04.359Z] 5856.50 IOPS, 22.88 MiB/s [2024-12-06T16:38:05.299Z] 5853.29 IOPS, 22.86 MiB/s [2024-12-06T16:38:06.241Z] 5780.62 IOPS, 22.58 MiB/s [2024-12-06T16:38:07.256Z] 5772.11 IOPS, 22.55 MiB/s [2024-12-06T16:38:07.256Z] 5801.00 IOPS, 22.66 MiB/s 00:22:15.190 Latency(us) 00:22:15.190 [2024-12-06T16:38:07.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.190 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:15.190 Verification LBA range: start 0x0 length 0x2000 00:22:15.190 TLSTESTn1 : 10.01 5805.84 22.68 0.00 0.00 22016.53 5597.87 33423.36 00:22:15.190 [2024-12-06T16:38:07.256Z] =================================================================================================================== 00:22:15.190 [2024-12-06T16:38:07.256Z] Total : 5805.84 22.68 0.00 0.00 22016.53 5597.87 33423.36 00:22:15.190 { 00:22:15.190 "results": [ 00:22:15.190 { 00:22:15.190 "job": "TLSTESTn1", 00:22:15.190 "core_mask": "0x4", 00:22:15.190 "workload": "verify", 00:22:15.190 "status": "finished", 00:22:15.190 "verify_range": { 00:22:15.190 "start": 0, 00:22:15.190 "length": 8192 00:22:15.190 }, 00:22:15.190 "queue_depth": 128, 00:22:15.190 "io_size": 4096, 00:22:15.190 "runtime": 10.013542, 00:22:15.190 "iops": 5805.837734539886, 00:22:15.190 "mibps": 22.67905365054643, 00:22:15.190 "io_failed": 0, 00:22:15.190 "io_timeout": 0, 00:22:15.190 "avg_latency_us": 22016.532047175926, 00:22:15.190 "min_latency_us": 5597.866666666667, 00:22:15.190 "max_latency_us": 33423.36 00:22:15.190 } 00:22:15.190 ], 00:22:15.190 "core_count": 1 
00:22:15.190 } 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1666748 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1666748 ']' 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1666748 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666748 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666748' 00:22:15.190 killing process with pid 1666748 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1666748 00:22:15.190 Received shutdown signal, test time was about 10.000000 seconds 00:22:15.190 00:22:15.190 Latency(us) 00:22:15.190 [2024-12-06T16:38:07.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.190 [2024-12-06T16:38:07.256Z] =================================================================================================================== 00:22:15.190 [2024-12-06T16:38:07.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.190 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1666748 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.SI9CzZBvy3 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SI9CzZBvy3 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SI9CzZBvy3 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SI9CzZBvy3 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:15.486 17:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SI9CzZBvy3 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1666905 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1666905 /var/tmp/bdevperf.sock 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1666905 ']' 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.486 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.486 [2024-12-06 17:38:07.357678] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:22:15.486 [2024-12-06 17:38:07.357735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666905 ] 00:22:15.486 [2024-12-06 17:38:07.440121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.487 [2024-12-06 17:38:07.468904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.425 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.425 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:16.425 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3 00:22:16.425 [2024-12-06 17:38:08.312220] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SI9CzZBvy3': 0100666 00:22:16.425 [2024-12-06 17:38:08.312240] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:16.425 request: 00:22:16.425 { 00:22:16.425 "name": "key0", 00:22:16.425 "path": "/tmp/tmp.SI9CzZBvy3", 00:22:16.425 "method": "keyring_file_add_key", 00:22:16.425 "req_id": 1 00:22:16.425 } 00:22:16.425 Got JSON-RPC error response 00:22:16.425 response: 00:22:16.425 { 00:22:16.425 "code": -1, 00:22:16.425 "message": "Operation not permitted" 00:22:16.425 } 00:22:16.425 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:16.685 [2024-12-06 17:38:08.492743] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.685 [2024-12-06 17:38:08.492767] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:16.685 request: 00:22:16.685 { 00:22:16.685 "name": "TLSTEST", 00:22:16.685 "trtype": "tcp", 00:22:16.685 "traddr": "10.0.0.2", 00:22:16.685 "adrfam": "ipv4", 00:22:16.685 "trsvcid": "4420", 00:22:16.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.685 "prchk_reftag": false, 00:22:16.685 "prchk_guard": false, 00:22:16.685 "hdgst": false, 00:22:16.685 "ddgst": false, 00:22:16.685 "psk": "key0", 00:22:16.685 "allow_unrecognized_csi": false, 00:22:16.685 "method": "bdev_nvme_attach_controller", 00:22:16.685 "req_id": 1 00:22:16.685 } 00:22:16.685 Got JSON-RPC error response 00:22:16.685 response: 00:22:16.685 { 00:22:16.685 "code": -126, 00:22:16.685 "message": "Required key not available" 00:22:16.685 } 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1666905 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1666905 ']' 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1666905 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666905 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666905' 00:22:16.685 killing process with pid 1666905 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1666905 00:22:16.685 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.685 00:22:16.685 Latency(us) 00:22:16.685 [2024-12-06T16:38:08.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.685 [2024-12-06T16:38:08.751Z] =================================================================================================================== 00:22:16.685 [2024-12-06T16:38:08.751Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1666905 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1666695 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1666695 ']' 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1666695 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.685 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666695 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666695' 00:22:16.945 killing process with pid 1666695 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1666695 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1666695 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1666938 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1666938 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1666938 ']' 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.945 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.945 [2024-12-06 17:38:08.915149] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:22:16.945 [2024-12-06 17:38:08.915201] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.945 [2024-12-06 17:38:09.005987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.204 [2024-12-06 17:38:09.035549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.204 [2024-12-06 17:38:09.035582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.204 [2024-12-06 17:38:09.035589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.204 [2024-12-06 17:38:09.035593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.204 [2024-12-06 17:38:09.035597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
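[annotation] The target has been restarted here for the tls.sh@178 case, which re-runs setup_nvmf_tgt while the key file is still mode 0666 and expects only the keyring step (and the add_host step that depends on it) to fail. The target-side sequence, as it appears verbatim below (rpc.py path abbreviated):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3   # rejected: file mode 0100666
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # then fails: Key 'key0' does not exist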
00:22:17.204 [2024-12-06 17:38:09.036041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.SI9CzZBvy3 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.SI9CzZBvy3 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.SI9CzZBvy3 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SI9CzZBvy3 00:22:17.774 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.033 [2024-12-06 17:38:09.912895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.033 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:18.033 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:18.292 [2024-12-06 17:38:10.249728] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:18.292 [2024-12-06 17:38:10.249953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.292 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:18.552 malloc0 00:22:18.552 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:18.812 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3 00:22:18.812 [2024-12-06 
17:38:10.768902] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SI9CzZBvy3': 0100666
00:22:18.812 [2024-12-06 17:38:10.768925] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:22:18.812 request:
00:22:18.812 {
00:22:18.812 "name": "key0",
00:22:18.812 "path": "/tmp/tmp.SI9CzZBvy3",
00:22:18.812 "method": "keyring_file_add_key",
00:22:18.812 "req_id": 1
00:22:18.812 }
00:22:18.812 Got JSON-RPC error response
00:22:18.813 response:
00:22:18.813 {
00:22:18.813 "code": -1,
00:22:18.813 "message": "Operation not permitted"
00:22:18.813 }
00:22:18.813 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:22:19.072 [2024-12-06 17:38:10.937338] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist
00:22:19.073 [2024-12-06 17:38:10.937367] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport
00:22:19.073 request:
00:22:19.073 {
00:22:19.073 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:19.073 "host": "nqn.2016-06.io.spdk:host1",
00:22:19.073 "psk": "key0",
00:22:19.073 "method": "nvmf_subsystem_add_host",
00:22:19.073 "req_id": 1
00:22:19.073 }
00:22:19.073 Got JSON-RPC error response
00:22:19.073 response:
00:22:19.073 {
00:22:19.073 "code": -32603,
00:22:19.073 "message": "Internal error"
00:22:19.073 }
00:22:19.073 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:22:19.073 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:19.073 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:19.073 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:19.073 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1666938
00:22:19.073 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1666938 ']'
00:22:19.073 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1666938
00:22:19.073 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:19.073 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:19.073 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666938
00:22:19.073 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:19.073 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:19.073 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666938'
killing process with pid 1666938
17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1666938
17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1666938
00:22:19.073 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.SI9CzZBvy3
00:22:19.073 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2
00:22:19.073 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:19.073 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:19.073 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:19.332 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1667001
00:22:19.332 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1667001
00:22:19.332 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:19.332 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1667001 ']'
00:22:19.332 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:19.332 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:19.332 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:19.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:19.332 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:19.332 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:19.332 [2024-12-06 17:38:11.209270] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:22:19.332 [2024-12-06 17:38:11.209332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:19.332 [2024-12-06 17:38:11.297275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:19.332 [2024-12-06 17:38:11.327177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:19.332 [2024-12-06 17:38:11.327207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:19.332 [2024-12-06 17:38:11.327213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:19.332 [2024-12-06 17:38:11.327218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:19.332 [2024-12-06 17:38:11.327223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
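The failed pass above is the negative half of the test: SPDK's file-based keyring rejects any PSK file whose mode grants group or other access (here 0100666), so keyring_file_add_key returns -1 and the dependent nvmf_subsystem_add_host call fails with -32603 before the script kills the target, tightens the mode at target/tls.sh@182, and restarts. A minimal standalone sketch of that fix, assuming a hypothetical PSK file /tmp/psk.txt and a target already listening on the default RPC socket:

  # 0666 is rejected by keyring_file_check_path; only owner permission bits may be set
  chmod 0600 /tmp/psk.txt
  scripts/rpc.py keyring_file_add_key key0 /tmp/psk.txt
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0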
00:22:19.332 [2024-12-06 17:38:11.327681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.270 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.270 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:20.270 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:20.270 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.270 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.270 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.270 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.SI9CzZBvy3 00:22:20.270 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SI9CzZBvy3 00:22:20.270 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:20.270 [2024-12-06 17:38:12.172552] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.270 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:20.529 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:20.529 [2024-12-06 17:38:12.493331] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:20.529 [2024-12-06 17:38:12.493523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.529 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:20.789 malloc0 00:22:20.789 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:20.789 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3 00:22:21.048 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:21.308 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1667053 00:22:21.308 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.308 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.308 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1667053 /var/tmp/bdevperf.sock 00:22:21.308 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1667053 ']' 00:22:21.308 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.308 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.308 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.308 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.308 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.308 [2024-12-06 17:38:13.198074] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:22:21.308 [2024-12-06 17:38:13.198128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667053 ] 00:22:21.308 [2024-12-06 17:38:13.278825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.308 [2024-12-06 17:38:13.307866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.245 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.245 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:22.245 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3 00:22:22.245 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:22.245 [2024-12-06 17:38:14.287397] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.505 TLSTESTn1 00:22:22.505 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:22.765 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:22.765 "subsystems": [ 00:22:22.765 { 00:22:22.765 "subsystem": "keyring", 00:22:22.765 "config": [ 00:22:22.765 { 00:22:22.765 "method": "keyring_file_add_key", 00:22:22.765 "params": { 00:22:22.765 "name": "key0", 00:22:22.765 "path": "/tmp/tmp.SI9CzZBvy3" 00:22:22.765 } 00:22:22.765 } 00:22:22.765 ] 00:22:22.765 }, 00:22:22.765 { 00:22:22.765 "subsystem": "iobuf", 00:22:22.765 "config": [ 00:22:22.765 { 00:22:22.765 "method": "iobuf_set_options", 00:22:22.765 "params": { 00:22:22.765 "small_pool_count": 8192, 00:22:22.765 "large_pool_count": 1024, 00:22:22.765 "small_bufsize": 8192, 00:22:22.765 "large_bufsize": 135168, 00:22:22.765 "enable_numa": false 00:22:22.765 } 00:22:22.765 } 00:22:22.765 ] 00:22:22.765 }, 00:22:22.765 { 00:22:22.765 "subsystem": "sock", 00:22:22.765 "config": [ 00:22:22.765 { 00:22:22.765 "method": "sock_set_default_impl", 00:22:22.765 "params": { 00:22:22.765 "impl_name": "posix" 
00:22:22.765 } 00:22:22.765 }, 00:22:22.765 { 00:22:22.765 "method": "sock_impl_set_options", 00:22:22.765 "params": { 00:22:22.765 "impl_name": "ssl", 00:22:22.765 "recv_buf_size": 4096, 00:22:22.765 "send_buf_size": 4096, 00:22:22.765 "enable_recv_pipe": true, 00:22:22.765 "enable_quickack": false, 00:22:22.765 "enable_placement_id": 0, 00:22:22.765 "enable_zerocopy_send_server": true, 00:22:22.765 "enable_zerocopy_send_client": false, 00:22:22.765 "zerocopy_threshold": 0, 00:22:22.765 "tls_version": 0, 00:22:22.765 "enable_ktls": false 00:22:22.765 } 00:22:22.765 }, 00:22:22.765 { 00:22:22.765 "method": "sock_impl_set_options", 00:22:22.765 "params": { 00:22:22.765 "impl_name": "posix", 00:22:22.765 "recv_buf_size": 2097152, 00:22:22.765 "send_buf_size": 2097152, 00:22:22.765 "enable_recv_pipe": true, 00:22:22.765 "enable_quickack": false, 00:22:22.765 "enable_placement_id": 0, 00:22:22.765 "enable_zerocopy_send_server": true, 00:22:22.765 "enable_zerocopy_send_client": false, 00:22:22.765 "zerocopy_threshold": 0, 00:22:22.765 "tls_version": 0, 00:22:22.765 "enable_ktls": false 00:22:22.765 } 00:22:22.765 } 00:22:22.765 ] 00:22:22.765 }, 00:22:22.765 { 00:22:22.765 "subsystem": "vmd", 00:22:22.765 "config": [] 00:22:22.765 }, 00:22:22.765 { 00:22:22.765 "subsystem": "accel", 00:22:22.765 "config": [ 00:22:22.765 { 00:22:22.765 "method": "accel_set_options", 00:22:22.765 "params": { 00:22:22.765 "small_cache_size": 128, 00:22:22.765 "large_cache_size": 16, 00:22:22.765 "task_count": 2048, 00:22:22.765 "sequence_count": 2048, 00:22:22.765 "buf_count": 2048 00:22:22.765 } 00:22:22.765 } 00:22:22.765 ] 00:22:22.765 }, 00:22:22.765 { 00:22:22.765 "subsystem": "bdev", 00:22:22.765 "config": [ 00:22:22.765 { 00:22:22.765 "method": "bdev_set_options", 00:22:22.765 "params": { 00:22:22.765 "bdev_io_pool_size": 65535, 00:22:22.765 "bdev_io_cache_size": 256, 00:22:22.765 "bdev_auto_examine": true, 00:22:22.765 "iobuf_small_cache_size": 128, 00:22:22.765 "iobuf_large_cache_size": 16 00:22:22.765 } 00:22:22.765 }, 00:22:22.765 { 00:22:22.765 "method": "bdev_raid_set_options", 00:22:22.765 "params": { 00:22:22.765 "process_window_size_kb": 1024, 00:22:22.765 "process_max_bandwidth_mb_sec": 0 00:22:22.765 } 00:22:22.765 }, 00:22:22.765 { 00:22:22.765 "method": "bdev_iscsi_set_options", 00:22:22.765 "params": { 00:22:22.765 "timeout_sec": 30 00:22:22.765 } 00:22:22.765 }, 00:22:22.765 { 00:22:22.765 "method": "bdev_nvme_set_options", 00:22:22.765 "params": { 00:22:22.765 "action_on_timeout": "none", 00:22:22.765 "timeout_us": 0, 00:22:22.765 "timeout_admin_us": 0, 00:22:22.765 "keep_alive_timeout_ms": 10000, 00:22:22.765 "arbitration_burst": 0, 00:22:22.765 "low_priority_weight": 0, 00:22:22.765 "medium_priority_weight": 0, 00:22:22.765 "high_priority_weight": 0, 00:22:22.765 "nvme_adminq_poll_period_us": 10000, 00:22:22.765 "nvme_ioq_poll_period_us": 0, 00:22:22.765 "io_queue_requests": 0, 00:22:22.765 "delay_cmd_submit": true, 00:22:22.765 "transport_retry_count": 4, 00:22:22.765 "bdev_retry_count": 3, 00:22:22.766 "transport_ack_timeout": 0, 00:22:22.766 "ctrlr_loss_timeout_sec": 0, 00:22:22.766 "reconnect_delay_sec": 0, 00:22:22.766 "fast_io_fail_timeout_sec": 0, 00:22:22.766 "disable_auto_failback": false, 00:22:22.766 "generate_uuids": false, 00:22:22.766 "transport_tos": 0, 00:22:22.766 "nvme_error_stat": false, 00:22:22.766 "rdma_srq_size": 0, 00:22:22.766 "io_path_stat": false, 00:22:22.766 "allow_accel_sequence": false, 00:22:22.766 "rdma_max_cq_size": 0, 00:22:22.766 
"rdma_cm_event_timeout_ms": 0, 00:22:22.766 "dhchap_digests": [ 00:22:22.766 "sha256", 00:22:22.766 "sha384", 00:22:22.766 "sha512" 00:22:22.766 ], 00:22:22.766 "dhchap_dhgroups": [ 00:22:22.766 "null", 00:22:22.766 "ffdhe2048", 00:22:22.766 "ffdhe3072", 00:22:22.766 "ffdhe4096", 00:22:22.766 "ffdhe6144", 00:22:22.766 "ffdhe8192" 00:22:22.766 ] 00:22:22.766 } 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "method": "bdev_nvme_set_hotplug", 00:22:22.766 "params": { 00:22:22.766 "period_us": 100000, 00:22:22.766 "enable": false 00:22:22.766 } 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "method": "bdev_malloc_create", 00:22:22.766 "params": { 00:22:22.766 "name": "malloc0", 00:22:22.766 "num_blocks": 8192, 00:22:22.766 "block_size": 4096, 00:22:22.766 "physical_block_size": 4096, 00:22:22.766 "uuid": "b5aad8db-9697-44c8-b0fb-b99d54cb105a", 00:22:22.766 "optimal_io_boundary": 0, 00:22:22.766 "md_size": 0, 00:22:22.766 "dif_type": 0, 00:22:22.766 "dif_is_head_of_md": false, 00:22:22.766 "dif_pi_format": 0 00:22:22.766 } 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "method": "bdev_wait_for_examine" 00:22:22.766 } 00:22:22.766 ] 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "subsystem": "nbd", 00:22:22.766 "config": [] 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "subsystem": "scheduler", 00:22:22.766 "config": [ 00:22:22.766 { 00:22:22.766 "method": "framework_set_scheduler", 00:22:22.766 "params": { 00:22:22.766 "name": "static" 00:22:22.766 } 00:22:22.766 } 00:22:22.766 ] 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "subsystem": "nvmf", 00:22:22.766 "config": [ 00:22:22.766 { 00:22:22.766 "method": "nvmf_set_config", 00:22:22.766 "params": { 00:22:22.766 "discovery_filter": "match_any", 00:22:22.766 "admin_cmd_passthru": { 00:22:22.766 "identify_ctrlr": false 00:22:22.766 }, 00:22:22.766 "dhchap_digests": [ 00:22:22.766 "sha256", 00:22:22.766 "sha384", 00:22:22.766 "sha512" 00:22:22.766 ], 00:22:22.766 "dhchap_dhgroups": [ 00:22:22.766 "null", 00:22:22.766 "ffdhe2048", 00:22:22.766 "ffdhe3072", 00:22:22.766 "ffdhe4096", 00:22:22.766 "ffdhe6144", 00:22:22.766 "ffdhe8192" 00:22:22.766 ] 00:22:22.766 } 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "method": "nvmf_set_max_subsystems", 00:22:22.766 "params": { 00:22:22.766 "max_subsystems": 1024 00:22:22.766 } 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "method": "nvmf_set_crdt", 00:22:22.766 "params": { 00:22:22.766 "crdt1": 0, 00:22:22.766 "crdt2": 0, 00:22:22.766 "crdt3": 0 00:22:22.766 } 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "method": "nvmf_create_transport", 00:22:22.766 "params": { 00:22:22.766 "trtype": "TCP", 00:22:22.766 "max_queue_depth": 128, 00:22:22.766 "max_io_qpairs_per_ctrlr": 127, 00:22:22.766 "in_capsule_data_size": 4096, 00:22:22.766 "max_io_size": 131072, 00:22:22.766 "io_unit_size": 131072, 00:22:22.766 "max_aq_depth": 128, 00:22:22.766 "num_shared_buffers": 511, 00:22:22.766 "buf_cache_size": 4294967295, 00:22:22.766 "dif_insert_or_strip": false, 00:22:22.766 "zcopy": false, 00:22:22.766 "c2h_success": false, 00:22:22.766 "sock_priority": 0, 00:22:22.766 "abort_timeout_sec": 1, 00:22:22.766 "ack_timeout": 0, 00:22:22.766 "data_wr_pool_size": 0 00:22:22.766 } 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "method": "nvmf_create_subsystem", 00:22:22.766 "params": { 00:22:22.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.766 "allow_any_host": false, 00:22:22.766 "serial_number": "SPDK00000000000001", 00:22:22.766 "model_number": "SPDK bdev Controller", 00:22:22.766 "max_namespaces": 10, 00:22:22.766 "min_cntlid": 1, 00:22:22.766 
"max_cntlid": 65519, 00:22:22.766 "ana_reporting": false 00:22:22.766 } 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "method": "nvmf_subsystem_add_host", 00:22:22.766 "params": { 00:22:22.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.766 "host": "nqn.2016-06.io.spdk:host1", 00:22:22.766 "psk": "key0" 00:22:22.766 } 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "method": "nvmf_subsystem_add_ns", 00:22:22.766 "params": { 00:22:22.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.766 "namespace": { 00:22:22.766 "nsid": 1, 00:22:22.766 "bdev_name": "malloc0", 00:22:22.766 "nguid": "B5AAD8DB969744C8B0FBB99D54CB105A", 00:22:22.766 "uuid": "b5aad8db-9697-44c8-b0fb-b99d54cb105a", 00:22:22.766 "no_auto_visible": false 00:22:22.766 } 00:22:22.766 } 00:22:22.766 }, 00:22:22.766 { 00:22:22.766 "method": "nvmf_subsystem_add_listener", 00:22:22.766 "params": { 00:22:22.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.766 "listen_address": { 00:22:22.766 "trtype": "TCP", 00:22:22.766 "adrfam": "IPv4", 00:22:22.766 "traddr": "10.0.0.2", 00:22:22.766 "trsvcid": "4420" 00:22:22.766 }, 00:22:22.766 "secure_channel": true 00:22:22.766 } 00:22:22.766 } 00:22:22.766 ] 00:22:22.766 } 00:22:22.766 ] 00:22:22.766 }' 00:22:22.766 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:23.026 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:23.026 "subsystems": [ 00:22:23.026 { 00:22:23.026 "subsystem": "keyring", 00:22:23.026 "config": [ 00:22:23.026 { 00:22:23.026 "method": "keyring_file_add_key", 00:22:23.026 "params": { 00:22:23.026 "name": "key0", 00:22:23.026 "path": "/tmp/tmp.SI9CzZBvy3" 00:22:23.026 } 00:22:23.026 } 00:22:23.026 ] 00:22:23.026 }, 00:22:23.026 { 00:22:23.026 "subsystem": "iobuf", 00:22:23.026 "config": [ 00:22:23.026 { 00:22:23.026 "method": "iobuf_set_options", 00:22:23.026 "params": { 00:22:23.026 "small_pool_count": 8192, 00:22:23.026 "large_pool_count": 1024, 00:22:23.026 "small_bufsize": 8192, 00:22:23.026 "large_bufsize": 135168, 00:22:23.026 "enable_numa": false 00:22:23.026 } 00:22:23.026 } 00:22:23.026 ] 00:22:23.026 }, 00:22:23.026 { 00:22:23.026 "subsystem": "sock", 00:22:23.026 "config": [ 00:22:23.026 { 00:22:23.026 "method": "sock_set_default_impl", 00:22:23.026 "params": { 00:22:23.026 "impl_name": "posix" 00:22:23.026 } 00:22:23.026 }, 00:22:23.026 { 00:22:23.026 "method": "sock_impl_set_options", 00:22:23.026 "params": { 00:22:23.026 "impl_name": "ssl", 00:22:23.026 "recv_buf_size": 4096, 00:22:23.026 "send_buf_size": 4096, 00:22:23.026 "enable_recv_pipe": true, 00:22:23.026 "enable_quickack": false, 00:22:23.026 "enable_placement_id": 0, 00:22:23.026 "enable_zerocopy_send_server": true, 00:22:23.026 "enable_zerocopy_send_client": false, 00:22:23.026 "zerocopy_threshold": 0, 00:22:23.026 "tls_version": 0, 00:22:23.026 "enable_ktls": false 00:22:23.026 } 00:22:23.026 }, 00:22:23.026 { 00:22:23.026 "method": "sock_impl_set_options", 00:22:23.026 "params": { 00:22:23.026 "impl_name": "posix", 00:22:23.026 "recv_buf_size": 2097152, 00:22:23.026 "send_buf_size": 2097152, 00:22:23.026 "enable_recv_pipe": true, 00:22:23.026 "enable_quickack": false, 00:22:23.026 "enable_placement_id": 0, 00:22:23.026 "enable_zerocopy_send_server": true, 00:22:23.026 "enable_zerocopy_send_client": false, 00:22:23.026 "zerocopy_threshold": 0, 00:22:23.026 "tls_version": 0, 00:22:23.026 "enable_ktls": false 00:22:23.026 } 00:22:23.026 
} 00:22:23.026 ] 00:22:23.026 }, 00:22:23.026 { 00:22:23.026 "subsystem": "vmd", 00:22:23.026 "config": [] 00:22:23.026 }, 00:22:23.026 { 00:22:23.026 "subsystem": "accel", 00:22:23.026 "config": [ 00:22:23.026 { 00:22:23.026 "method": "accel_set_options", 00:22:23.026 "params": { 00:22:23.026 "small_cache_size": 128, 00:22:23.026 "large_cache_size": 16, 00:22:23.026 "task_count": 2048, 00:22:23.026 "sequence_count": 2048, 00:22:23.026 "buf_count": 2048 00:22:23.026 } 00:22:23.026 } 00:22:23.026 ] 00:22:23.026 }, 00:22:23.026 { 00:22:23.026 "subsystem": "bdev", 00:22:23.026 "config": [ 00:22:23.026 { 00:22:23.026 "method": "bdev_set_options", 00:22:23.026 "params": { 00:22:23.026 "bdev_io_pool_size": 65535, 00:22:23.026 "bdev_io_cache_size": 256, 00:22:23.026 "bdev_auto_examine": true, 00:22:23.026 "iobuf_small_cache_size": 128, 00:22:23.026 "iobuf_large_cache_size": 16 00:22:23.026 } 00:22:23.026 }, 00:22:23.026 { 00:22:23.026 "method": "bdev_raid_set_options", 00:22:23.026 "params": { 00:22:23.026 "process_window_size_kb": 1024, 00:22:23.026 "process_max_bandwidth_mb_sec": 0 00:22:23.026 } 00:22:23.026 }, 00:22:23.026 { 00:22:23.026 "method": "bdev_iscsi_set_options", 00:22:23.026 "params": { 00:22:23.026 "timeout_sec": 30 00:22:23.026 } 00:22:23.026 }, 00:22:23.026 { 00:22:23.026 "method": "bdev_nvme_set_options", 00:22:23.026 "params": { 00:22:23.026 "action_on_timeout": "none", 00:22:23.026 "timeout_us": 0, 00:22:23.026 "timeout_admin_us": 0, 00:22:23.026 "keep_alive_timeout_ms": 10000, 00:22:23.026 "arbitration_burst": 0, 00:22:23.026 "low_priority_weight": 0, 00:22:23.026 "medium_priority_weight": 0, 00:22:23.026 "high_priority_weight": 0, 00:22:23.026 "nvme_adminq_poll_period_us": 10000, 00:22:23.026 "nvme_ioq_poll_period_us": 0, 00:22:23.026 "io_queue_requests": 512, 00:22:23.026 "delay_cmd_submit": true, 00:22:23.026 "transport_retry_count": 4, 00:22:23.026 "bdev_retry_count": 3, 00:22:23.026 "transport_ack_timeout": 0, 00:22:23.026 "ctrlr_loss_timeout_sec": 0, 00:22:23.026 "reconnect_delay_sec": 0, 00:22:23.026 "fast_io_fail_timeout_sec": 0, 00:22:23.026 "disable_auto_failback": false, 00:22:23.026 "generate_uuids": false, 00:22:23.026 "transport_tos": 0, 00:22:23.026 "nvme_error_stat": false, 00:22:23.026 "rdma_srq_size": 0, 00:22:23.026 "io_path_stat": false, 00:22:23.026 "allow_accel_sequence": false, 00:22:23.026 "rdma_max_cq_size": 0, 00:22:23.027 "rdma_cm_event_timeout_ms": 0, 00:22:23.027 "dhchap_digests": [ 00:22:23.027 "sha256", 00:22:23.027 "sha384", 00:22:23.027 "sha512" 00:22:23.027 ], 00:22:23.027 "dhchap_dhgroups": [ 00:22:23.027 "null", 00:22:23.027 "ffdhe2048", 00:22:23.027 "ffdhe3072", 00:22:23.027 "ffdhe4096", 00:22:23.027 "ffdhe6144", 00:22:23.027 "ffdhe8192" 00:22:23.027 ] 00:22:23.027 } 00:22:23.027 }, 00:22:23.027 { 00:22:23.027 "method": "bdev_nvme_attach_controller", 00:22:23.027 "params": { 00:22:23.027 "name": "TLSTEST", 00:22:23.027 "trtype": "TCP", 00:22:23.027 "adrfam": "IPv4", 00:22:23.027 "traddr": "10.0.0.2", 00:22:23.027 "trsvcid": "4420", 00:22:23.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.027 "prchk_reftag": false, 00:22:23.027 "prchk_guard": false, 00:22:23.027 "ctrlr_loss_timeout_sec": 0, 00:22:23.027 "reconnect_delay_sec": 0, 00:22:23.027 "fast_io_fail_timeout_sec": 0, 00:22:23.027 "psk": "key0", 00:22:23.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.027 "hdgst": false, 00:22:23.027 "ddgst": false, 00:22:23.027 "multipath": "multipath" 00:22:23.027 } 00:22:23.027 }, 00:22:23.027 { 00:22:23.027 "method": 
"bdev_nvme_set_hotplug", 00:22:23.027 "params": { 00:22:23.027 "period_us": 100000, 00:22:23.027 "enable": false 00:22:23.027 } 00:22:23.027 }, 00:22:23.027 { 00:22:23.027 "method": "bdev_wait_for_examine" 00:22:23.027 } 00:22:23.027 ] 00:22:23.027 }, 00:22:23.027 { 00:22:23.027 "subsystem": "nbd", 00:22:23.027 "config": [] 00:22:23.027 } 00:22:23.027 ] 00:22:23.027 }' 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1667053 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667053 ']' 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667053 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667053 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667053' 00:22:23.027 killing process with pid 1667053 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667053 00:22:23.027 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.027 00:22:23.027 Latency(us) 00:22:23.027 [2024-12-06T16:38:15.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.027 [2024-12-06T16:38:15.093Z] =================================================================================================================== 00:22:23.027 [2024-12-06T16:38:15.093Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:23.027 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667053 00:22:23.027 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1667001 00:22:23.027 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667001 ']' 00:22:23.027 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667001 00:22:23.027 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:23.027 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.027 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667001 00:22:23.287 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:23.287 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:23.287 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667001' 00:22:23.287 killing process with pid 1667001 00:22:23.287 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667001 00:22:23.287 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667001 00:22:23.287 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:23.287 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.287 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.287 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.287 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:23.287 "subsystems": [ 00:22:23.287 { 00:22:23.287 "subsystem": "keyring", 00:22:23.287 "config": [ 00:22:23.287 { 00:22:23.287 "method": "keyring_file_add_key", 00:22:23.287 "params": { 00:22:23.287 "name": "key0", 00:22:23.287 "path": "/tmp/tmp.SI9CzZBvy3" 00:22:23.287 } 00:22:23.287 } 00:22:23.287 ] 00:22:23.287 }, 00:22:23.287 { 00:22:23.287 "subsystem": "iobuf", 00:22:23.287 "config": [ 00:22:23.287 { 00:22:23.287 "method": "iobuf_set_options", 00:22:23.287 "params": { 00:22:23.287 "small_pool_count": 8192, 00:22:23.287 "large_pool_count": 1024, 00:22:23.287 "small_bufsize": 8192, 00:22:23.287 "large_bufsize": 135168, 00:22:23.287 "enable_numa": false 00:22:23.287 } 00:22:23.287 } 00:22:23.287 ] 00:22:23.287 }, 00:22:23.287 { 00:22:23.287 "subsystem": "sock", 00:22:23.287 "config": [ 00:22:23.287 { 00:22:23.287 "method": "sock_set_default_impl", 00:22:23.287 "params": { 00:22:23.287 "impl_name": "posix" 00:22:23.287 } 00:22:23.287 }, 00:22:23.287 { 00:22:23.287 "method": "sock_impl_set_options", 00:22:23.287 "params": { 00:22:23.287 "impl_name": "ssl", 00:22:23.287 "recv_buf_size": 4096, 00:22:23.287 "send_buf_size": 4096, 00:22:23.287 "enable_recv_pipe": true, 00:22:23.287 "enable_quickack": false, 00:22:23.287 "enable_placement_id": 0, 00:22:23.287 "enable_zerocopy_send_server": true, 00:22:23.287 "enable_zerocopy_send_client": false, 00:22:23.287 "zerocopy_threshold": 0, 00:22:23.287 "tls_version": 0, 00:22:23.287 "enable_ktls": false 00:22:23.287 } 00:22:23.287 }, 00:22:23.287 { 00:22:23.287 "method": "sock_impl_set_options", 00:22:23.287 "params": { 00:22:23.287 "impl_name": "posix", 00:22:23.287 "recv_buf_size": 2097152, 00:22:23.287 "send_buf_size": 2097152, 00:22:23.287 "enable_recv_pipe": true, 00:22:23.287 "enable_quickack": false, 00:22:23.287 "enable_placement_id": 0, 00:22:23.287 "enable_zerocopy_send_server": true, 00:22:23.287 "enable_zerocopy_send_client": false, 00:22:23.287 "zerocopy_threshold": 0, 00:22:23.287 "tls_version": 0, 00:22:23.287 "enable_ktls": false 00:22:23.287 } 00:22:23.287 } 00:22:23.287 ] 00:22:23.287 }, 00:22:23.287 { 00:22:23.287 "subsystem": "vmd", 00:22:23.287 "config": [] 00:22:23.287 }, 00:22:23.287 { 00:22:23.287 "subsystem": "accel", 00:22:23.287 "config": [ 00:22:23.287 { 00:22:23.287 "method": "accel_set_options", 00:22:23.287 "params": { 00:22:23.287 "small_cache_size": 128, 00:22:23.287 "large_cache_size": 16, 00:22:23.287 "task_count": 2048, 00:22:23.287 "sequence_count": 2048, 00:22:23.287 "buf_count": 2048 00:22:23.287 } 00:22:23.287 } 00:22:23.287 ] 00:22:23.287 }, 00:22:23.287 { 00:22:23.287 "subsystem": "bdev", 00:22:23.287 "config": [ 00:22:23.287 { 00:22:23.287 "method": "bdev_set_options", 00:22:23.287 "params": { 00:22:23.287 "bdev_io_pool_size": 65535, 00:22:23.287 "bdev_io_cache_size": 256, 00:22:23.287 "bdev_auto_examine": true, 00:22:23.287 "iobuf_small_cache_size": 128, 00:22:23.287 "iobuf_large_cache_size": 16 00:22:23.287 } 00:22:23.287 }, 00:22:23.287 { 00:22:23.287 "method": "bdev_raid_set_options", 00:22:23.287 "params": { 00:22:23.287 
"process_window_size_kb": 1024, 00:22:23.287 "process_max_bandwidth_mb_sec": 0 00:22:23.287 } 00:22:23.287 }, 00:22:23.287 { 00:22:23.287 "method": "bdev_iscsi_set_options", 00:22:23.287 "params": { 00:22:23.287 "timeout_sec": 30 00:22:23.287 } 00:22:23.287 }, 00:22:23.287 { 00:22:23.287 "method": "bdev_nvme_set_options", 00:22:23.287 "params": { 00:22:23.287 "action_on_timeout": "none", 00:22:23.287 "timeout_us": 0, 00:22:23.287 "timeout_admin_us": 0, 00:22:23.287 "keep_alive_timeout_ms": 10000, 00:22:23.287 "arbitration_burst": 0, 00:22:23.287 "low_priority_weight": 0, 00:22:23.287 "medium_priority_weight": 0, 00:22:23.287 "high_priority_weight": 0, 00:22:23.287 "nvme_adminq_poll_period_us": 10000, 00:22:23.287 "nvme_ioq_poll_period_us": 0, 00:22:23.287 "io_queue_requests": 0, 00:22:23.287 "delay_cmd_submit": true, 00:22:23.287 "transport_retry_count": 4, 00:22:23.287 "bdev_retry_count": 3, 00:22:23.287 "transport_ack_timeout": 0, 00:22:23.287 "ctrlr_loss_timeout_sec": 0, 00:22:23.287 "reconnect_delay_sec": 0, 00:22:23.287 "fast_io_fail_timeout_sec": 0, 00:22:23.287 "disable_auto_failback": false, 00:22:23.287 "generate_uuids": false, 00:22:23.287 "transport_tos": 0, 00:22:23.287 "nvme_error_stat": false, 00:22:23.287 "rdma_srq_size": 0, 00:22:23.287 "io_path_stat": false, 00:22:23.287 "allow_accel_sequence": false, 00:22:23.287 "rdma_max_cq_size": 0, 00:22:23.287 "rdma_cm_event_timeout_ms": 0, 00:22:23.287 "dhchap_digests": [ 00:22:23.287 "sha256", 00:22:23.287 "sha384", 00:22:23.287 "sha512" 00:22:23.287 ], 00:22:23.287 "dhchap_dhgroups": [ 00:22:23.287 "null", 00:22:23.287 "ffdhe2048", 00:22:23.288 "ffdhe3072", 00:22:23.288 "ffdhe4096", 00:22:23.288 "ffdhe6144", 00:22:23.288 "ffdhe8192" 00:22:23.288 ] 00:22:23.288 } 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "method": "bdev_nvme_set_hotplug", 00:22:23.288 "params": { 00:22:23.288 "period_us": 100000, 00:22:23.288 "enable": false 00:22:23.288 } 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "method": "bdev_malloc_create", 00:22:23.288 "params": { 00:22:23.288 "name": "malloc0", 00:22:23.288 "num_blocks": 8192, 00:22:23.288 "block_size": 4096, 00:22:23.288 "physical_block_size": 4096, 00:22:23.288 "uuid": "b5aad8db-9697-44c8-b0fb-b99d54cb105a", 00:22:23.288 "optimal_io_boundary": 0, 00:22:23.288 "md_size": 0, 00:22:23.288 "dif_type": 0, 00:22:23.288 "dif_is_head_of_md": false, 00:22:23.288 "dif_pi_format": 0 00:22:23.288 } 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "method": "bdev_wait_for_examine" 00:22:23.288 } 00:22:23.288 ] 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "subsystem": "nbd", 00:22:23.288 "config": [] 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "subsystem": "scheduler", 00:22:23.288 "config": [ 00:22:23.288 { 00:22:23.288 "method": "framework_set_scheduler", 00:22:23.288 "params": { 00:22:23.288 "name": "static" 00:22:23.288 } 00:22:23.288 } 00:22:23.288 ] 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "subsystem": "nvmf", 00:22:23.288 "config": [ 00:22:23.288 { 00:22:23.288 "method": "nvmf_set_config", 00:22:23.288 "params": { 00:22:23.288 "discovery_filter": "match_any", 00:22:23.288 "admin_cmd_passthru": { 00:22:23.288 "identify_ctrlr": false 00:22:23.288 }, 00:22:23.288 "dhchap_digests": [ 00:22:23.288 "sha256", 00:22:23.288 "sha384", 00:22:23.288 "sha512" 00:22:23.288 ], 00:22:23.288 "dhchap_dhgroups": [ 00:22:23.288 "null", 00:22:23.288 "ffdhe2048", 00:22:23.288 "ffdhe3072", 00:22:23.288 "ffdhe4096", 00:22:23.288 "ffdhe6144", 00:22:23.288 "ffdhe8192" 00:22:23.288 ] 00:22:23.288 } 00:22:23.288 }, 00:22:23.288 { 
00:22:23.288 "method": "nvmf_set_max_subsystems", 00:22:23.288 "params": { 00:22:23.288 "max_subsystems": 1024 00:22:23.288 } 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "method": "nvmf_set_crdt", 00:22:23.288 "params": { 00:22:23.288 "crdt1": 0, 00:22:23.288 "crdt2": 0, 00:22:23.288 "crdt3": 0 00:22:23.288 } 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "method": "nvmf_create_transport", 00:22:23.288 "params": { 00:22:23.288 "trtype": "TCP", 00:22:23.288 "max_queue_depth": 128, 00:22:23.288 "max_io_qpairs_per_ctrlr": 127, 00:22:23.288 "in_capsule_data_size": 4096, 00:22:23.288 "max_io_size": 131072, 00:22:23.288 "io_unit_size": 131072, 00:22:23.288 "max_aq_depth": 128, 00:22:23.288 "num_shared_buffers": 511, 00:22:23.288 "buf_cache_size": 4294967295, 00:22:23.288 "dif_insert_or_strip": false, 00:22:23.288 "zcopy": false, 00:22:23.288 "c2h_success": false, 00:22:23.288 "sock_priority": 0, 00:22:23.288 "abort_timeout_sec": 1, 00:22:23.288 "ack_timeout": 0, 00:22:23.288 "data_wr_pool_size": 0 00:22:23.288 } 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "method": "nvmf_create_subsystem", 00:22:23.288 "params": { 00:22:23.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.288 "allow_any_host": false, 00:22:23.288 "serial_number": "SPDK00000000000001", 00:22:23.288 "model_number": "SPDK bdev Controller", 00:22:23.288 "max_namespaces": 10, 00:22:23.288 "min_cntlid": 1, 00:22:23.288 "max_cntlid": 65519, 00:22:23.288 "ana_reporting": false 00:22:23.288 } 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "method": "nvmf_subsystem_add_host", 00:22:23.288 "params": { 00:22:23.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.288 "host": "nqn.2016-06.io.spdk:host1", 00:22:23.288 "psk": "key0" 00:22:23.288 } 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "method": "nvmf_subsystem_add_ns", 00:22:23.288 "params": { 00:22:23.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.288 "namespace": { 00:22:23.288 "nsid": 1, 00:22:23.288 "bdev_name": "malloc0", 00:22:23.288 "nguid": "B5AAD8DB969744C8B0FBB99D54CB105A", 00:22:23.288 "uuid": "b5aad8db-9697-44c8-b0fb-b99d54cb105a", 00:22:23.288 "no_auto_visible": false 00:22:23.288 } 00:22:23.288 } 00:22:23.288 }, 00:22:23.288 { 00:22:23.288 "method": "nvmf_subsystem_add_listener", 00:22:23.288 "params": { 00:22:23.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.288 "listen_address": { 00:22:23.288 "trtype": "TCP", 00:22:23.288 "adrfam": "IPv4", 00:22:23.288 "traddr": "10.0.0.2", 00:22:23.288 "trsvcid": "4420" 00:22:23.288 }, 00:22:23.288 "secure_channel": true 00:22:23.288 } 00:22:23.288 } 00:22:23.288 ] 00:22:23.288 } 00:22:23.288 ] 00:22:23.288 }' 00:22:23.288 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1667098 00:22:23.288 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1667098 00:22:23.288 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:23.288 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1667098 ']' 00:22:23.288 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.288 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.288 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:22:23.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.288 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.288 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.288 [2024-12-06 17:38:15.279721] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:22:23.288 [2024-12-06 17:38:15.279773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.547 [2024-12-06 17:38:15.367448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.547 [2024-12-06 17:38:15.397481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.547 [2024-12-06 17:38:15.397516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.547 [2024-12-06 17:38:15.397522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.547 [2024-12-06 17:38:15.397527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.547 [2024-12-06 17:38:15.397531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.547 [2024-12-06 17:38:15.398022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.547 [2024-12-06 17:38:15.591522] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.806 [2024-12-06 17:38:15.623546] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.806 [2024-12-06 17:38:15.623773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1667132 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1667132 /var/tmp/bdevperf.sock 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1667132 ']' 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
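This pass boots the target and bdevperf from the JSON produced by save_config rather than replaying individual RPCs: the harness hands each process its configuration through shell process substitution, which is why the command lines show -c /dev/fd/62 above and -c /dev/fd/63 just below. A rough sketch of the pattern, assuming a previously captured configuration in the hypothetical file /tmp/tgtconf.json:

  scripts/rpc.py save_config > /tmp/tgtconf.json         # serialize the live target configuration
  build/bin/nvmf_tgt -m 0x2 -c <(cat /tmp/tgtconf.json)  # bash substitutes a /dev/fd/NN path for the subshell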
00:22:24.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.065 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:24.065 "subsystems": [ 00:22:24.065 { 00:22:24.065 "subsystem": "keyring", 00:22:24.065 "config": [ 00:22:24.065 { 00:22:24.065 "method": "keyring_file_add_key", 00:22:24.065 "params": { 00:22:24.065 "name": "key0", 00:22:24.065 "path": "/tmp/tmp.SI9CzZBvy3" 00:22:24.065 } 00:22:24.065 } 00:22:24.065 ] 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "subsystem": "iobuf", 00:22:24.065 "config": [ 00:22:24.065 { 00:22:24.065 "method": "iobuf_set_options", 00:22:24.065 "params": { 00:22:24.065 "small_pool_count": 8192, 00:22:24.065 "large_pool_count": 1024, 00:22:24.065 "small_bufsize": 8192, 00:22:24.065 "large_bufsize": 135168, 00:22:24.065 "enable_numa": false 00:22:24.065 } 00:22:24.065 } 00:22:24.065 ] 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "subsystem": "sock", 00:22:24.065 "config": [ 00:22:24.065 { 00:22:24.065 "method": "sock_set_default_impl", 00:22:24.065 "params": { 00:22:24.065 "impl_name": "posix" 00:22:24.065 } 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "method": "sock_impl_set_options", 00:22:24.065 "params": { 00:22:24.065 "impl_name": "ssl", 00:22:24.065 "recv_buf_size": 4096, 00:22:24.065 "send_buf_size": 4096, 00:22:24.065 "enable_recv_pipe": true, 00:22:24.065 "enable_quickack": false, 00:22:24.065 "enable_placement_id": 0, 00:22:24.065 "enable_zerocopy_send_server": true, 00:22:24.065 "enable_zerocopy_send_client": false, 00:22:24.065 "zerocopy_threshold": 0, 00:22:24.065 "tls_version": 0, 00:22:24.065 "enable_ktls": false 00:22:24.065 } 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "method": "sock_impl_set_options", 00:22:24.065 "params": { 00:22:24.065 "impl_name": "posix", 00:22:24.065 "recv_buf_size": 2097152, 00:22:24.065 "send_buf_size": 2097152, 00:22:24.065 "enable_recv_pipe": true, 00:22:24.065 "enable_quickack": false, 00:22:24.065 "enable_placement_id": 0, 00:22:24.065 "enable_zerocopy_send_server": true, 00:22:24.065 "enable_zerocopy_send_client": false, 00:22:24.065 "zerocopy_threshold": 0, 00:22:24.065 "tls_version": 0, 00:22:24.065 "enable_ktls": false 00:22:24.065 } 00:22:24.065 } 00:22:24.065 ] 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "subsystem": "vmd", 00:22:24.065 "config": [] 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "subsystem": "accel", 00:22:24.065 "config": [ 00:22:24.065 { 00:22:24.065 "method": "accel_set_options", 00:22:24.065 "params": { 00:22:24.065 "small_cache_size": 128, 00:22:24.065 "large_cache_size": 16, 00:22:24.065 "task_count": 2048, 00:22:24.065 "sequence_count": 2048, 00:22:24.065 "buf_count": 2048 00:22:24.065 } 00:22:24.065 } 00:22:24.065 ] 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "subsystem": "bdev", 00:22:24.065 "config": [ 00:22:24.065 { 00:22:24.065 "method": "bdev_set_options", 00:22:24.065 "params": { 00:22:24.065 "bdev_io_pool_size": 65535, 00:22:24.065 "bdev_io_cache_size": 256, 00:22:24.065 "bdev_auto_examine": true, 00:22:24.065 "iobuf_small_cache_size": 128, 
00:22:24.065 "iobuf_large_cache_size": 16 00:22:24.065 } 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "method": "bdev_raid_set_options", 00:22:24.065 "params": { 00:22:24.065 "process_window_size_kb": 1024, 00:22:24.065 "process_max_bandwidth_mb_sec": 0 00:22:24.065 } 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "method": "bdev_iscsi_set_options", 00:22:24.065 "params": { 00:22:24.065 "timeout_sec": 30 00:22:24.065 } 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "method": "bdev_nvme_set_options", 00:22:24.065 "params": { 00:22:24.065 "action_on_timeout": "none", 00:22:24.065 "timeout_us": 0, 00:22:24.065 "timeout_admin_us": 0, 00:22:24.065 "keep_alive_timeout_ms": 10000, 00:22:24.065 "arbitration_burst": 0, 00:22:24.065 "low_priority_weight": 0, 00:22:24.065 "medium_priority_weight": 0, 00:22:24.065 "high_priority_weight": 0, 00:22:24.065 "nvme_adminq_poll_period_us": 10000, 00:22:24.065 "nvme_ioq_poll_period_us": 0, 00:22:24.065 "io_queue_requests": 512, 00:22:24.065 "delay_cmd_submit": true, 00:22:24.065 "transport_retry_count": 4, 00:22:24.065 "bdev_retry_count": 3, 00:22:24.065 "transport_ack_timeout": 0, 00:22:24.065 "ctrlr_loss_timeout_sec": 0, 00:22:24.065 "reconnect_delay_sec": 0, 00:22:24.065 "fast_io_fail_timeout_sec": 0, 00:22:24.065 "disable_auto_failback": false, 00:22:24.065 "generate_uuids": false, 00:22:24.065 "transport_tos": 0, 00:22:24.065 "nvme_error_stat": false, 00:22:24.065 "rdma_srq_size": 0, 00:22:24.065 "io_path_stat": false, 00:22:24.065 "allow_accel_sequence": false, 00:22:24.065 "rdma_max_cq_size": 0, 00:22:24.065 "rdma_cm_event_timeout_ms": 0, 00:22:24.065 "dhchap_digests": [ 00:22:24.065 "sha256", 00:22:24.065 "sha384", 00:22:24.065 "sha512" 00:22:24.065 ], 00:22:24.065 "dhchap_dhgroups": [ 00:22:24.065 "null", 00:22:24.065 "ffdhe2048", 00:22:24.065 "ffdhe3072", 00:22:24.065 "ffdhe4096", 00:22:24.065 "ffdhe6144", 00:22:24.065 "ffdhe8192" 00:22:24.065 ] 00:22:24.065 } 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "method": "bdev_nvme_attach_controller", 00:22:24.065 "params": { 00:22:24.065 "name": "TLSTEST", 00:22:24.065 "trtype": "TCP", 00:22:24.065 "adrfam": "IPv4", 00:22:24.065 "traddr": "10.0.0.2", 00:22:24.065 "trsvcid": "4420", 00:22:24.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.065 "prchk_reftag": false, 00:22:24.065 "prchk_guard": false, 00:22:24.065 "ctrlr_loss_timeout_sec": 0, 00:22:24.065 "reconnect_delay_sec": 0, 00:22:24.065 "fast_io_fail_timeout_sec": 0, 00:22:24.065 "psk": "key0", 00:22:24.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.065 "hdgst": false, 00:22:24.065 "ddgst": false, 00:22:24.065 "multipath": "multipath" 00:22:24.065 } 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "method": "bdev_nvme_set_hotplug", 00:22:24.065 "params": { 00:22:24.065 "period_us": 100000, 00:22:24.065 "enable": false 00:22:24.065 } 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "method": "bdev_wait_for_examine" 00:22:24.065 } 00:22:24.065 ] 00:22:24.065 }, 00:22:24.065 { 00:22:24.065 "subsystem": "nbd", 00:22:24.065 "config": [] 00:22:24.065 } 00:22:24.065 ] 00:22:24.065 }' 00:22:24.324 [2024-12-06 17:38:16.181922] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:22:24.324 [2024-12-06 17:38:16.181978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667132 ]
00:22:24.324 [2024-12-06 17:38:16.265479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:24.324 [2024-12-06 17:38:16.294478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:24.583 [2024-12-06 17:38:16.429339] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:25.152 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:25.152 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:25.152 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:22:25.152 Running I/O for 10 seconds...
00:22:27.024 4937.00 IOPS, 19.29 MiB/s
[2024-12-06T16:38:20.471Z] 5698.50 IOPS, 22.26 MiB/s
[2024-12-06T16:38:21.409Z] 5509.67 IOPS, 21.52 MiB/s
[2024-12-06T16:38:22.348Z] 5599.50 IOPS, 21.87 MiB/s
[2024-12-06T16:38:23.287Z] 5534.40 IOPS, 21.62 MiB/s
[2024-12-06T16:38:24.224Z] 5467.67 IOPS, 21.36 MiB/s
[2024-12-06T16:38:25.162Z] 5390.14 IOPS, 21.06 MiB/s
[2024-12-06T16:38:26.100Z] 5424.25 IOPS, 21.19 MiB/s
[2024-12-06T16:38:27.478Z] 5458.89 IOPS, 21.32 MiB/s
[2024-12-06T16:38:27.478Z] 5472.00 IOPS, 21.38 MiB/s
00:22:35.412 Latency(us)
[2024-12-06T16:38:27.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:35.412 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:35.412 Verification LBA range: start 0x0 length 0x2000
00:22:35.412 TLSTESTn1 : 10.01 5476.64 21.39 0.00 0.00 23341.08 5297.49 234181.97
00:22:35.412 [2024-12-06T16:38:27.478Z] ===================================================================================================================
00:22:35.412 [2024-12-06T16:38:27.478Z] Total : 5476.64 21.39 0.00 0.00 23341.08 5297.49 234181.97
00:22:35.412 {
00:22:35.412 "results": [
00:22:35.412 {
00:22:35.412 "job": "TLSTESTn1",
00:22:35.412 "core_mask": "0x4",
00:22:35.412 "workload": "verify",
00:22:35.412 "status": "finished",
00:22:35.412 "verify_range": {
00:22:35.412 "start": 0,
00:22:35.412 "length": 8192
00:22:35.412 },
00:22:35.412 "queue_depth": 128,
00:22:35.412 "io_size": 4096,
00:22:35.412 "runtime": 10.014539,
00:22:35.412 "iops": 5476.637516714449,
00:22:35.412 "mibps": 21.393115299665816,
00:22:35.412 "io_failed": 0,
00:22:35.412 "io_timeout": 0,
00:22:35.412 "avg_latency_us": 23341.084055476545,
00:22:35.412 "min_latency_us": 5297.493333333333,
00:22:35.412 "max_latency_us": 234181.97333333333
00:22:35.412 }
00:22:35.412 ],
00:22:35.412 "core_count": 1
00:22:35.412 }
00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1667132
00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667132 ']'
00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667132
00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@959 -- # uname 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667132 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667132' 00:22:35.412 killing process with pid 1667132 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667132 00:22:35.412 Received shutdown signal, test time was about 10.000000 seconds 00:22:35.412 00:22:35.412 Latency(us) 00:22:35.412 [2024-12-06T16:38:27.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.412 [2024-12-06T16:38:27.478Z] =================================================================================================================== 00:22:35.412 [2024-12-06T16:38:27.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667132 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1667098 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667098 ']' 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667098 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667098 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667098' 00:22:35.412 killing process with pid 1667098 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667098 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667098 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1667281 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1667281 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
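As a quick sanity check on the run summarized above, the reported throughput is consistent with the reported IOPS at the 4096-byte I/O size: 5476.64 IOPS times 4096 bytes is roughly 21.39 MiB/s, matching the mibps field. One way to reproduce the arithmetic from a shell:

  echo 'scale=2; 5476.64 * 4096 / 1048576' | bc   # prints 21.39 (MiB/s)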
00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1667281 ']' 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.412 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.413 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.413 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.673 [2024-12-06 17:38:27.526155] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:22:35.673 [2024-12-06 17:38:27.526207] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.673 [2024-12-06 17:38:27.619815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.673 [2024-12-06 17:38:27.660384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.673 [2024-12-06 17:38:27.660433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.673 [2024-12-06 17:38:27.660442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.673 [2024-12-06 17:38:27.660449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.673 [2024-12-06 17:38:27.660455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
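The -e 0xFFFF flag enables every tracepoint group, which is what the app_setup_trace notices above refer to. Per the log's own suggestion, the trace can be inspected live or the shared-memory file kept for later decoding (the copy destination here is arbitrary):

  build/bin/spdk_trace -s nvmf -i 0   # snapshot the running target's tracepoints, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/      # or preserve the trace file for offline analysis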
00:22:35.673 [2024-12-06 17:38:27.661158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.614 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.614 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:36.614 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.614 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.614 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.614 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.614 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.SI9CzZBvy3 00:22:36.614 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SI9CzZBvy3 00:22:36.614 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:36.614 [2024-12-06 17:38:28.531821] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.614 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:36.874 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:36.874 [2024-12-06 17:38:28.920791] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:36.874 [2024-12-06 17:38:28.921128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.134 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:37.134 malloc0 00:22:37.134 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:37.395 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3 00:22:37.654 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:37.914 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:37.914 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1667335 00:22:37.914 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:37.914 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1667335 /var/tmp/bdevperf.sock 00:22:37.914 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1667335 ']' 00:22:37.914 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.914 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.914 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.914 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.914 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.914 [2024-12-06 17:38:29.769398] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:22:37.914 [2024-12-06 17:38:29.769469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667335 ] 00:22:37.914 [2024-12-06 17:38:29.859398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.914 [2024-12-06 17:38:29.893577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.875 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.875 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:38.875 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3 00:22:38.875 17:38:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:38.875 [2024-12-06 17:38:30.908791] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.133 nvme0n1 00:22:39.133 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.133 Running I/O for 1 seconds... 
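
On the initiator side, TLS needs exactly the two RPCs just shown: the same PSK file the target registered is added to bdevperf's keyring, and bdev_nvme_attach_controller then references it by name. Isolated from the xtrace above (socket path, addresses, and NQNs verbatim from this run):

  # Register the PSK under the name the attach command will reference.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3
  # Attach to the TLS-enabled listener; --psk picks the keyring entry, and
  # the connection is what triggers the "TLS support is experimental" notice.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
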
00:22:40.069 4294.00 IOPS, 16.77 MiB/s 00:22:40.069 Latency(us) 00:22:40.069 [2024-12-06T16:38:32.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.069 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:40.069 Verification LBA range: start 0x0 length 0x2000 00:22:40.069 nvme0n1 : 1.02 4329.05 16.91 0.00 0.00 29366.32 4587.52 38666.24 00:22:40.069 [2024-12-06T16:38:32.135Z] =================================================================================================================== 00:22:40.069 [2024-12-06T16:38:32.135Z] Total : 4329.05 16.91 0.00 0.00 29366.32 4587.52 38666.24 00:22:40.069 { 00:22:40.069 "results": [ 00:22:40.069 { 00:22:40.069 "job": "nvme0n1", 00:22:40.069 "core_mask": "0x2", 00:22:40.069 "workload": "verify", 00:22:40.069 "status": "finished", 00:22:40.069 "verify_range": { 00:22:40.069 "start": 0, 00:22:40.069 "length": 8192 00:22:40.069 }, 00:22:40.069 "queue_depth": 128, 00:22:40.069 "io_size": 4096, 00:22:40.069 "runtime": 1.021703, 00:22:40.069 "iops": 4329.04669948116, 00:22:40.069 "mibps": 16.910338669848283, 00:22:40.069 "io_failed": 0, 00:22:40.069 "io_timeout": 0, 00:22:40.069 "avg_latency_us": 29366.322779410657, 00:22:40.069 "min_latency_us": 4587.52, 00:22:40.069 "max_latency_us": 38666.24 00:22:40.069 } 00:22:40.069 ], 00:22:40.069 "core_count": 1 00:22:40.069 } 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1667335 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667335 ']' 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667335 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667335 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667335' 00:22:40.328 killing process with pid 1667335 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667335 00:22:40.328 Received shutdown signal, test time was about 1.000000 seconds 00:22:40.328 00:22:40.328 Latency(us) 00:22:40.328 [2024-12-06T16:38:32.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.328 [2024-12-06T16:38:32.394Z] =================================================================================================================== 00:22:40.328 [2024-12-06T16:38:32.394Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667335 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1667281 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667281 ']' 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667281 00:22:40.328 17:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667281 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667281' 00:22:40.328 killing process with pid 1667281 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667281 00:22:40.328 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667281 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1667385 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1667385 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1667385 ']' 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.588 17:38:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.588 [2024-12-06 17:38:32.528292] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:22:40.588 [2024-12-06 17:38:32.528344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.588 [2024-12-06 17:38:32.616950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.588 [2024-12-06 17:38:32.645778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.588 [2024-12-06 17:38:32.645809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
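
The first bdevperf run above reported 4329.05 IOPS at a 4096-byte I/O size over a 1.021703 s runtime; the MiB/s column follows directly from those two numbers, which makes a quick cross-check possible (bc assumed available):

  # 4329.05 I/Os per second * 4096 bytes each, converted to MiB (1048576 B):
  echo '4329.05 * 4096 / 1048576' | bc -l   # ~16.9103, matching the reported 16.91 MiB/s
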
00:22:40.588 [2024-12-06 17:38:32.645815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.588 [2024-12-06 17:38:32.645819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.588 [2024-12-06 17:38:32.645824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.588 [2024-12-06 17:38:32.646273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.526 [2024-12-06 17:38:33.378644] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.526 malloc0 00:22:41.526 [2024-12-06 17:38:33.404406] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:41.526 [2024-12-06 17:38:33.404621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1667420 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1667420 /var/tmp/bdevperf.sock 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1667420 ']' 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.526 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.526 [2024-12-06 17:38:33.484827] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:22:41.526 [2024-12-06 17:38:33.484876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667420 ] 00:22:41.526 [2024-12-06 17:38:33.567697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.785 [2024-12-06 17:38:33.597367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.352 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.352 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:42.352 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SI9CzZBvy3 00:22:42.611 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:42.611 [2024-12-06 17:38:34.621848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.870 nvme0n1 00:22:42.870 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:42.870 Running I/O for 1 seconds... 00:22:43.811 6303.00 IOPS, 24.62 MiB/s 00:22:43.811 Latency(us) 00:22:43.811 [2024-12-06T16:38:35.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.812 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:43.812 Verification LBA range: start 0x0 length 0x2000 00:22:43.812 nvme0n1 : 1.01 6343.56 24.78 0.00 0.00 20045.15 4505.60 22719.15 00:22:43.812 [2024-12-06T16:38:35.878Z] =================================================================================================================== 00:22:43.812 [2024-12-06T16:38:35.878Z] Total : 6343.56 24.78 0.00 0.00 20045.15 4505.60 22719.15 00:22:43.812 { 00:22:43.812 "results": [ 00:22:43.812 { 00:22:43.812 "job": "nvme0n1", 00:22:43.812 "core_mask": "0x2", 00:22:43.812 "workload": "verify", 00:22:43.812 "status": "finished", 00:22:43.812 "verify_range": { 00:22:43.812 "start": 0, 00:22:43.812 "length": 8192 00:22:43.812 }, 00:22:43.812 "queue_depth": 128, 00:22:43.812 "io_size": 4096, 00:22:43.812 "runtime": 1.013784, 00:22:43.812 "iops": 6343.560363943404, 00:22:43.812 "mibps": 24.77953267165392, 00:22:43.812 "io_failed": 0, 00:22:43.812 "io_timeout": 0, 00:22:43.812 "avg_latency_us": 20045.154814699632, 00:22:43.812 "min_latency_us": 4505.6, 00:22:43.812 "max_latency_us": 22719.146666666667 00:22:43.812 } 00:22:43.812 ], 00:22:43.812 "core_count": 1 00:22:43.812 } 00:22:43.812 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:43.812 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.812 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.071 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.071 17:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:44.071 "subsystems": [ 00:22:44.071 { 00:22:44.071 "subsystem": "keyring", 00:22:44.071 "config": [ 00:22:44.071 { 00:22:44.071 "method": "keyring_file_add_key", 00:22:44.071 "params": { 00:22:44.071 "name": "key0", 00:22:44.071 "path": "/tmp/tmp.SI9CzZBvy3" 00:22:44.071 } 00:22:44.071 } 00:22:44.071 ] 00:22:44.071 }, 00:22:44.071 { 00:22:44.071 "subsystem": "iobuf", 00:22:44.071 "config": [ 00:22:44.071 { 00:22:44.071 "method": "iobuf_set_options", 00:22:44.071 "params": { 00:22:44.071 "small_pool_count": 8192, 00:22:44.071 "large_pool_count": 1024, 00:22:44.071 "small_bufsize": 8192, 00:22:44.071 "large_bufsize": 135168, 00:22:44.071 "enable_numa": false 00:22:44.071 } 00:22:44.071 } 00:22:44.071 ] 00:22:44.071 }, 00:22:44.071 { 00:22:44.071 "subsystem": "sock", 00:22:44.071 "config": [ 00:22:44.071 { 00:22:44.071 "method": "sock_set_default_impl", 00:22:44.071 "params": { 00:22:44.071 "impl_name": "posix" 00:22:44.071 } 00:22:44.071 }, 00:22:44.071 { 00:22:44.071 "method": "sock_impl_set_options", 00:22:44.071 "params": { 00:22:44.071 "impl_name": "ssl", 00:22:44.071 "recv_buf_size": 4096, 00:22:44.071 "send_buf_size": 4096, 00:22:44.071 "enable_recv_pipe": true, 00:22:44.071 "enable_quickack": false, 00:22:44.071 "enable_placement_id": 0, 00:22:44.071 "enable_zerocopy_send_server": true, 00:22:44.071 "enable_zerocopy_send_client": false, 00:22:44.071 "zerocopy_threshold": 0, 00:22:44.071 "tls_version": 0, 00:22:44.071 "enable_ktls": false 00:22:44.071 } 00:22:44.071 }, 00:22:44.071 { 00:22:44.071 "method": "sock_impl_set_options", 00:22:44.071 "params": { 00:22:44.071 "impl_name": "posix", 00:22:44.071 "recv_buf_size": 2097152, 00:22:44.071 "send_buf_size": 2097152, 00:22:44.071 "enable_recv_pipe": true, 00:22:44.071 "enable_quickack": false, 00:22:44.071 "enable_placement_id": 0, 00:22:44.071 "enable_zerocopy_send_server": true, 00:22:44.071 "enable_zerocopy_send_client": false, 00:22:44.071 "zerocopy_threshold": 0, 00:22:44.071 "tls_version": 0, 00:22:44.071 "enable_ktls": false 00:22:44.071 } 00:22:44.071 } 00:22:44.071 ] 00:22:44.071 }, 00:22:44.071 { 00:22:44.071 "subsystem": "vmd", 00:22:44.071 "config": [] 00:22:44.071 }, 00:22:44.071 { 00:22:44.071 "subsystem": "accel", 00:22:44.071 "config": [ 00:22:44.071 { 00:22:44.071 "method": "accel_set_options", 00:22:44.071 "params": { 00:22:44.071 "small_cache_size": 128, 00:22:44.071 "large_cache_size": 16, 00:22:44.071 "task_count": 2048, 00:22:44.071 "sequence_count": 2048, 00:22:44.071 "buf_count": 2048 00:22:44.071 } 00:22:44.071 } 00:22:44.071 ] 00:22:44.071 }, 00:22:44.071 { 00:22:44.071 "subsystem": "bdev", 00:22:44.071 "config": [ 00:22:44.071 { 00:22:44.071 "method": "bdev_set_options", 00:22:44.071 "params": { 00:22:44.071 "bdev_io_pool_size": 65535, 00:22:44.071 "bdev_io_cache_size": 256, 00:22:44.071 "bdev_auto_examine": true, 00:22:44.071 "iobuf_small_cache_size": 128, 00:22:44.072 "iobuf_large_cache_size": 16 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "bdev_raid_set_options", 00:22:44.072 "params": { 00:22:44.072 "process_window_size_kb": 1024, 00:22:44.072 "process_max_bandwidth_mb_sec": 0 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "bdev_iscsi_set_options", 00:22:44.072 "params": { 00:22:44.072 "timeout_sec": 30 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "bdev_nvme_set_options", 00:22:44.072 "params": { 00:22:44.072 "action_on_timeout": "none", 00:22:44.072 
"timeout_us": 0, 00:22:44.072 "timeout_admin_us": 0, 00:22:44.072 "keep_alive_timeout_ms": 10000, 00:22:44.072 "arbitration_burst": 0, 00:22:44.072 "low_priority_weight": 0, 00:22:44.072 "medium_priority_weight": 0, 00:22:44.072 "high_priority_weight": 0, 00:22:44.072 "nvme_adminq_poll_period_us": 10000, 00:22:44.072 "nvme_ioq_poll_period_us": 0, 00:22:44.072 "io_queue_requests": 0, 00:22:44.072 "delay_cmd_submit": true, 00:22:44.072 "transport_retry_count": 4, 00:22:44.072 "bdev_retry_count": 3, 00:22:44.072 "transport_ack_timeout": 0, 00:22:44.072 "ctrlr_loss_timeout_sec": 0, 00:22:44.072 "reconnect_delay_sec": 0, 00:22:44.072 "fast_io_fail_timeout_sec": 0, 00:22:44.072 "disable_auto_failback": false, 00:22:44.072 "generate_uuids": false, 00:22:44.072 "transport_tos": 0, 00:22:44.072 "nvme_error_stat": false, 00:22:44.072 "rdma_srq_size": 0, 00:22:44.072 "io_path_stat": false, 00:22:44.072 "allow_accel_sequence": false, 00:22:44.072 "rdma_max_cq_size": 0, 00:22:44.072 "rdma_cm_event_timeout_ms": 0, 00:22:44.072 "dhchap_digests": [ 00:22:44.072 "sha256", 00:22:44.072 "sha384", 00:22:44.072 "sha512" 00:22:44.072 ], 00:22:44.072 "dhchap_dhgroups": [ 00:22:44.072 "null", 00:22:44.072 "ffdhe2048", 00:22:44.072 "ffdhe3072", 00:22:44.072 "ffdhe4096", 00:22:44.072 "ffdhe6144", 00:22:44.072 "ffdhe8192" 00:22:44.072 ] 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "bdev_nvme_set_hotplug", 00:22:44.072 "params": { 00:22:44.072 "period_us": 100000, 00:22:44.072 "enable": false 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "bdev_malloc_create", 00:22:44.072 "params": { 00:22:44.072 "name": "malloc0", 00:22:44.072 "num_blocks": 8192, 00:22:44.072 "block_size": 4096, 00:22:44.072 "physical_block_size": 4096, 00:22:44.072 "uuid": "9ae4cd43-4069-450d-bfd8-6180953e1490", 00:22:44.072 "optimal_io_boundary": 0, 00:22:44.072 "md_size": 0, 00:22:44.072 "dif_type": 0, 00:22:44.072 "dif_is_head_of_md": false, 00:22:44.072 "dif_pi_format": 0 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "bdev_wait_for_examine" 00:22:44.072 } 00:22:44.072 ] 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "subsystem": "nbd", 00:22:44.072 "config": [] 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "subsystem": "scheduler", 00:22:44.072 "config": [ 00:22:44.072 { 00:22:44.072 "method": "framework_set_scheduler", 00:22:44.072 "params": { 00:22:44.072 "name": "static" 00:22:44.072 } 00:22:44.072 } 00:22:44.072 ] 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "subsystem": "nvmf", 00:22:44.072 "config": [ 00:22:44.072 { 00:22:44.072 "method": "nvmf_set_config", 00:22:44.072 "params": { 00:22:44.072 "discovery_filter": "match_any", 00:22:44.072 "admin_cmd_passthru": { 00:22:44.072 "identify_ctrlr": false 00:22:44.072 }, 00:22:44.072 "dhchap_digests": [ 00:22:44.072 "sha256", 00:22:44.072 "sha384", 00:22:44.072 "sha512" 00:22:44.072 ], 00:22:44.072 "dhchap_dhgroups": [ 00:22:44.072 "null", 00:22:44.072 "ffdhe2048", 00:22:44.072 "ffdhe3072", 00:22:44.072 "ffdhe4096", 00:22:44.072 "ffdhe6144", 00:22:44.072 "ffdhe8192" 00:22:44.072 ] 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "nvmf_set_max_subsystems", 00:22:44.072 "params": { 00:22:44.072 "max_subsystems": 1024 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "nvmf_set_crdt", 00:22:44.072 "params": { 00:22:44.072 "crdt1": 0, 00:22:44.072 "crdt2": 0, 00:22:44.072 "crdt3": 0 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "nvmf_create_transport", 00:22:44.072 "params": 
{ 00:22:44.072 "trtype": "TCP", 00:22:44.072 "max_queue_depth": 128, 00:22:44.072 "max_io_qpairs_per_ctrlr": 127, 00:22:44.072 "in_capsule_data_size": 4096, 00:22:44.072 "max_io_size": 131072, 00:22:44.072 "io_unit_size": 131072, 00:22:44.072 "max_aq_depth": 128, 00:22:44.072 "num_shared_buffers": 511, 00:22:44.072 "buf_cache_size": 4294967295, 00:22:44.072 "dif_insert_or_strip": false, 00:22:44.072 "zcopy": false, 00:22:44.072 "c2h_success": false, 00:22:44.072 "sock_priority": 0, 00:22:44.072 "abort_timeout_sec": 1, 00:22:44.072 "ack_timeout": 0, 00:22:44.072 "data_wr_pool_size": 0 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "nvmf_create_subsystem", 00:22:44.072 "params": { 00:22:44.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.072 "allow_any_host": false, 00:22:44.072 "serial_number": "00000000000000000000", 00:22:44.072 "model_number": "SPDK bdev Controller", 00:22:44.072 "max_namespaces": 32, 00:22:44.072 "min_cntlid": 1, 00:22:44.072 "max_cntlid": 65519, 00:22:44.072 "ana_reporting": false 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "nvmf_subsystem_add_host", 00:22:44.072 "params": { 00:22:44.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.072 "host": "nqn.2016-06.io.spdk:host1", 00:22:44.072 "psk": "key0" 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "nvmf_subsystem_add_ns", 00:22:44.072 "params": { 00:22:44.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.072 "namespace": { 00:22:44.072 "nsid": 1, 00:22:44.072 "bdev_name": "malloc0", 00:22:44.072 "nguid": "9AE4CD434069450DBFD86180953E1490", 00:22:44.072 "uuid": "9ae4cd43-4069-450d-bfd8-6180953e1490", 00:22:44.072 "no_auto_visible": false 00:22:44.072 } 00:22:44.072 } 00:22:44.072 }, 00:22:44.072 { 00:22:44.072 "method": "nvmf_subsystem_add_listener", 00:22:44.072 "params": { 00:22:44.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.072 "listen_address": { 00:22:44.072 "trtype": "TCP", 00:22:44.072 "adrfam": "IPv4", 00:22:44.072 "traddr": "10.0.0.2", 00:22:44.072 "trsvcid": "4420" 00:22:44.072 }, 00:22:44.072 "secure_channel": false, 00:22:44.072 "sock_impl": "ssl" 00:22:44.072 } 00:22:44.072 } 00:22:44.072 ] 00:22:44.072 } 00:22:44.072 ] 00:22:44.072 }' 00:22:44.072 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:44.332 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:44.332 "subsystems": [ 00:22:44.332 { 00:22:44.332 "subsystem": "keyring", 00:22:44.332 "config": [ 00:22:44.332 { 00:22:44.332 "method": "keyring_file_add_key", 00:22:44.332 "params": { 00:22:44.332 "name": "key0", 00:22:44.332 "path": "/tmp/tmp.SI9CzZBvy3" 00:22:44.332 } 00:22:44.332 } 00:22:44.332 ] 00:22:44.332 }, 00:22:44.332 { 00:22:44.332 "subsystem": "iobuf", 00:22:44.332 "config": [ 00:22:44.332 { 00:22:44.332 "method": "iobuf_set_options", 00:22:44.332 "params": { 00:22:44.332 "small_pool_count": 8192, 00:22:44.332 "large_pool_count": 1024, 00:22:44.332 "small_bufsize": 8192, 00:22:44.332 "large_bufsize": 135168, 00:22:44.332 "enable_numa": false 00:22:44.332 } 00:22:44.332 } 00:22:44.332 ] 00:22:44.332 }, 00:22:44.332 { 00:22:44.332 "subsystem": "sock", 00:22:44.332 "config": [ 00:22:44.332 { 00:22:44.332 "method": "sock_set_default_impl", 00:22:44.332 "params": { 00:22:44.332 "impl_name": "posix" 00:22:44.332 } 00:22:44.332 }, 00:22:44.332 { 00:22:44.332 "method": "sock_impl_set_options", 00:22:44.332 
"params": { 00:22:44.332 "impl_name": "ssl", 00:22:44.332 "recv_buf_size": 4096, 00:22:44.332 "send_buf_size": 4096, 00:22:44.332 "enable_recv_pipe": true, 00:22:44.332 "enable_quickack": false, 00:22:44.332 "enable_placement_id": 0, 00:22:44.332 "enable_zerocopy_send_server": true, 00:22:44.332 "enable_zerocopy_send_client": false, 00:22:44.332 "zerocopy_threshold": 0, 00:22:44.332 "tls_version": 0, 00:22:44.332 "enable_ktls": false 00:22:44.332 } 00:22:44.332 }, 00:22:44.332 { 00:22:44.332 "method": "sock_impl_set_options", 00:22:44.332 "params": { 00:22:44.332 "impl_name": "posix", 00:22:44.332 "recv_buf_size": 2097152, 00:22:44.332 "send_buf_size": 2097152, 00:22:44.332 "enable_recv_pipe": true, 00:22:44.332 "enable_quickack": false, 00:22:44.332 "enable_placement_id": 0, 00:22:44.332 "enable_zerocopy_send_server": true, 00:22:44.332 "enable_zerocopy_send_client": false, 00:22:44.332 "zerocopy_threshold": 0, 00:22:44.332 "tls_version": 0, 00:22:44.332 "enable_ktls": false 00:22:44.332 } 00:22:44.332 } 00:22:44.332 ] 00:22:44.332 }, 00:22:44.332 { 00:22:44.332 "subsystem": "vmd", 00:22:44.332 "config": [] 00:22:44.332 }, 00:22:44.332 { 00:22:44.332 "subsystem": "accel", 00:22:44.332 "config": [ 00:22:44.332 { 00:22:44.332 "method": "accel_set_options", 00:22:44.332 "params": { 00:22:44.332 "small_cache_size": 128, 00:22:44.332 "large_cache_size": 16, 00:22:44.332 "task_count": 2048, 00:22:44.332 "sequence_count": 2048, 00:22:44.332 "buf_count": 2048 00:22:44.332 } 00:22:44.332 } 00:22:44.332 ] 00:22:44.332 }, 00:22:44.332 { 00:22:44.332 "subsystem": "bdev", 00:22:44.332 "config": [ 00:22:44.332 { 00:22:44.332 "method": "bdev_set_options", 00:22:44.332 "params": { 00:22:44.333 "bdev_io_pool_size": 65535, 00:22:44.333 "bdev_io_cache_size": 256, 00:22:44.333 "bdev_auto_examine": true, 00:22:44.333 "iobuf_small_cache_size": 128, 00:22:44.333 "iobuf_large_cache_size": 16 00:22:44.333 } 00:22:44.333 }, 00:22:44.333 { 00:22:44.333 "method": "bdev_raid_set_options", 00:22:44.333 "params": { 00:22:44.333 "process_window_size_kb": 1024, 00:22:44.333 "process_max_bandwidth_mb_sec": 0 00:22:44.333 } 00:22:44.333 }, 00:22:44.333 { 00:22:44.333 "method": "bdev_iscsi_set_options", 00:22:44.333 "params": { 00:22:44.333 "timeout_sec": 30 00:22:44.333 } 00:22:44.333 }, 00:22:44.333 { 00:22:44.333 "method": "bdev_nvme_set_options", 00:22:44.333 "params": { 00:22:44.333 "action_on_timeout": "none", 00:22:44.333 "timeout_us": 0, 00:22:44.333 "timeout_admin_us": 0, 00:22:44.333 "keep_alive_timeout_ms": 10000, 00:22:44.333 "arbitration_burst": 0, 00:22:44.333 "low_priority_weight": 0, 00:22:44.333 "medium_priority_weight": 0, 00:22:44.333 "high_priority_weight": 0, 00:22:44.333 "nvme_adminq_poll_period_us": 10000, 00:22:44.333 "nvme_ioq_poll_period_us": 0, 00:22:44.333 "io_queue_requests": 512, 00:22:44.333 "delay_cmd_submit": true, 00:22:44.333 "transport_retry_count": 4, 00:22:44.333 "bdev_retry_count": 3, 00:22:44.333 "transport_ack_timeout": 0, 00:22:44.333 "ctrlr_loss_timeout_sec": 0, 00:22:44.333 "reconnect_delay_sec": 0, 00:22:44.333 "fast_io_fail_timeout_sec": 0, 00:22:44.333 "disable_auto_failback": false, 00:22:44.333 "generate_uuids": false, 00:22:44.333 "transport_tos": 0, 00:22:44.333 "nvme_error_stat": false, 00:22:44.333 "rdma_srq_size": 0, 00:22:44.333 "io_path_stat": false, 00:22:44.333 "allow_accel_sequence": false, 00:22:44.333 "rdma_max_cq_size": 0, 00:22:44.333 "rdma_cm_event_timeout_ms": 0, 00:22:44.333 "dhchap_digests": [ 00:22:44.333 "sha256", 00:22:44.333 "sha384", 00:22:44.333 
"sha512" 00:22:44.333 ], 00:22:44.333 "dhchap_dhgroups": [ 00:22:44.333 "null", 00:22:44.333 "ffdhe2048", 00:22:44.333 "ffdhe3072", 00:22:44.333 "ffdhe4096", 00:22:44.333 "ffdhe6144", 00:22:44.333 "ffdhe8192" 00:22:44.333 ] 00:22:44.333 } 00:22:44.333 }, 00:22:44.333 { 00:22:44.333 "method": "bdev_nvme_attach_controller", 00:22:44.333 "params": { 00:22:44.333 "name": "nvme0", 00:22:44.333 "trtype": "TCP", 00:22:44.333 "adrfam": "IPv4", 00:22:44.333 "traddr": "10.0.0.2", 00:22:44.333 "trsvcid": "4420", 00:22:44.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.333 "prchk_reftag": false, 00:22:44.333 "prchk_guard": false, 00:22:44.333 "ctrlr_loss_timeout_sec": 0, 00:22:44.333 "reconnect_delay_sec": 0, 00:22:44.333 "fast_io_fail_timeout_sec": 0, 00:22:44.333 "psk": "key0", 00:22:44.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.333 "hdgst": false, 00:22:44.333 "ddgst": false, 00:22:44.333 "multipath": "multipath" 00:22:44.333 } 00:22:44.333 }, 00:22:44.333 { 00:22:44.333 "method": "bdev_nvme_set_hotplug", 00:22:44.333 "params": { 00:22:44.333 "period_us": 100000, 00:22:44.333 "enable": false 00:22:44.333 } 00:22:44.333 }, 00:22:44.333 { 00:22:44.333 "method": "bdev_enable_histogram", 00:22:44.333 "params": { 00:22:44.333 "name": "nvme0n1", 00:22:44.333 "enable": true 00:22:44.333 } 00:22:44.333 }, 00:22:44.333 { 00:22:44.333 "method": "bdev_wait_for_examine" 00:22:44.333 } 00:22:44.333 ] 00:22:44.333 }, 00:22:44.333 { 00:22:44.333 "subsystem": "nbd", 00:22:44.333 "config": [] 00:22:44.333 } 00:22:44.333 ] 00:22:44.333 }' 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1667420 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667420 ']' 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667420 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667420 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667420' 00:22:44.333 killing process with pid 1667420 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667420 00:22:44.333 Received shutdown signal, test time was about 1.000000 seconds 00:22:44.333 00:22:44.333 Latency(us) 00:22:44.333 [2024-12-06T16:38:36.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.333 [2024-12-06T16:38:36.399Z] =================================================================================================================== 00:22:44.333 [2024-12-06T16:38:36.399Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667420 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1667385 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667385 
']' 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667385 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.333 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667385 00:22:44.593 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.593 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.593 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667385' 00:22:44.593 killing process with pid 1667385 00:22:44.593 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667385 00:22:44.594 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667385 00:22:44.594 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:44.594 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.594 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.594 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:44.594 "subsystems": [ 00:22:44.594 { 00:22:44.594 "subsystem": "keyring", 00:22:44.594 "config": [ 00:22:44.594 { 00:22:44.594 "method": "keyring_file_add_key", 00:22:44.594 "params": { 00:22:44.594 "name": "key0", 00:22:44.594 "path": "/tmp/tmp.SI9CzZBvy3" 00:22:44.594 } 00:22:44.594 } 00:22:44.594 ] 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "subsystem": "iobuf", 00:22:44.594 "config": [ 00:22:44.594 { 00:22:44.594 "method": "iobuf_set_options", 00:22:44.594 "params": { 00:22:44.594 "small_pool_count": 8192, 00:22:44.594 "large_pool_count": 1024, 00:22:44.594 "small_bufsize": 8192, 00:22:44.594 "large_bufsize": 135168, 00:22:44.594 "enable_numa": false 00:22:44.594 } 00:22:44.594 } 00:22:44.594 ] 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "subsystem": "sock", 00:22:44.594 "config": [ 00:22:44.594 { 00:22:44.594 "method": "sock_set_default_impl", 00:22:44.594 "params": { 00:22:44.594 "impl_name": "posix" 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "sock_impl_set_options", 00:22:44.594 "params": { 00:22:44.594 "impl_name": "ssl", 00:22:44.594 "recv_buf_size": 4096, 00:22:44.594 "send_buf_size": 4096, 00:22:44.594 "enable_recv_pipe": true, 00:22:44.594 "enable_quickack": false, 00:22:44.594 "enable_placement_id": 0, 00:22:44.594 "enable_zerocopy_send_server": true, 00:22:44.594 "enable_zerocopy_send_client": false, 00:22:44.594 "zerocopy_threshold": 0, 00:22:44.594 "tls_version": 0, 00:22:44.594 "enable_ktls": false 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "sock_impl_set_options", 00:22:44.594 "params": { 00:22:44.594 "impl_name": "posix", 00:22:44.594 "recv_buf_size": 2097152, 00:22:44.594 "send_buf_size": 2097152, 00:22:44.594 "enable_recv_pipe": true, 00:22:44.594 "enable_quickack": false, 00:22:44.594 "enable_placement_id": 0, 00:22:44.594 "enable_zerocopy_send_server": true, 00:22:44.594 "enable_zerocopy_send_client": false, 00:22:44.594 "zerocopy_threshold": 0, 00:22:44.594 "tls_version": 0, 00:22:44.594 "enable_ktls": 
false 00:22:44.594 } 00:22:44.594 } 00:22:44.594 ] 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "subsystem": "vmd", 00:22:44.594 "config": [] 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "subsystem": "accel", 00:22:44.594 "config": [ 00:22:44.594 { 00:22:44.594 "method": "accel_set_options", 00:22:44.594 "params": { 00:22:44.594 "small_cache_size": 128, 00:22:44.594 "large_cache_size": 16, 00:22:44.594 "task_count": 2048, 00:22:44.594 "sequence_count": 2048, 00:22:44.594 "buf_count": 2048 00:22:44.594 } 00:22:44.594 } 00:22:44.594 ] 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "subsystem": "bdev", 00:22:44.594 "config": [ 00:22:44.594 { 00:22:44.594 "method": "bdev_set_options", 00:22:44.594 "params": { 00:22:44.594 "bdev_io_pool_size": 65535, 00:22:44.594 "bdev_io_cache_size": 256, 00:22:44.594 "bdev_auto_examine": true, 00:22:44.594 "iobuf_small_cache_size": 128, 00:22:44.594 "iobuf_large_cache_size": 16 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "bdev_raid_set_options", 00:22:44.594 "params": { 00:22:44.594 "process_window_size_kb": 1024, 00:22:44.594 "process_max_bandwidth_mb_sec": 0 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "bdev_iscsi_set_options", 00:22:44.594 "params": { 00:22:44.594 "timeout_sec": 30 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "bdev_nvme_set_options", 00:22:44.594 "params": { 00:22:44.594 "action_on_timeout": "none", 00:22:44.594 "timeout_us": 0, 00:22:44.594 "timeout_admin_us": 0, 00:22:44.594 "keep_alive_timeout_ms": 10000, 00:22:44.594 "arbitration_burst": 0, 00:22:44.594 "low_priority_weight": 0, 00:22:44.594 "medium_priority_weight": 0, 00:22:44.594 "high_priority_weight": 0, 00:22:44.594 "nvme_adminq_poll_period_us": 10000, 00:22:44.594 "nvme_ioq_poll_period_us": 0, 00:22:44.594 "io_queue_requests": 0, 00:22:44.594 "delay_cmd_submit": true, 00:22:44.594 "transport_retry_count": 4, 00:22:44.594 "bdev_retry_count": 3, 00:22:44.594 "transport_ack_timeout": 0, 00:22:44.594 "ctrlr_loss_timeout_sec": 0, 00:22:44.594 "reconnect_delay_sec": 0, 00:22:44.594 "fast_io_fail_timeout_sec": 0, 00:22:44.594 "disable_auto_failback": false, 00:22:44.594 "generate_uuids": false, 00:22:44.594 "transport_tos": 0, 00:22:44.594 "nvme_error_stat": false, 00:22:44.594 "rdma_srq_size": 0, 00:22:44.594 "io_path_stat": false, 00:22:44.594 "allow_accel_sequence": false, 00:22:44.594 "rdma_max_cq_size": 0, 00:22:44.594 "rdma_cm_event_timeout_ms": 0, 00:22:44.594 "dhchap_digests": [ 00:22:44.594 "sha256", 00:22:44.594 "sha384", 00:22:44.594 "sha512" 00:22:44.594 ], 00:22:44.594 "dhchap_dhgroups": [ 00:22:44.594 "null", 00:22:44.594 "ffdhe2048", 00:22:44.594 "ffdhe3072", 00:22:44.594 "ffdhe4096", 00:22:44.594 "ffdhe6144", 00:22:44.594 "ffdhe8192" 00:22:44.594 ] 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "bdev_nvme_set_hotplug", 00:22:44.594 "params": { 00:22:44.594 "period_us": 100000, 00:22:44.594 "enable": false 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "bdev_malloc_create", 00:22:44.594 "params": { 00:22:44.594 "name": "malloc0", 00:22:44.594 "num_blocks": 8192, 00:22:44.594 "block_size": 4096, 00:22:44.594 "physical_block_size": 4096, 00:22:44.594 "uuid": "9ae4cd43-4069-450d-bfd8-6180953e1490", 00:22:44.594 "optimal_io_boundary": 0, 00:22:44.594 "md_size": 0, 00:22:44.594 "dif_type": 0, 00:22:44.594 "dif_is_head_of_md": false, 00:22:44.594 "dif_pi_format": 0 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "bdev_wait_for_examine" 
00:22:44.594 } 00:22:44.594 ] 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "subsystem": "nbd", 00:22:44.594 "config": [] 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "subsystem": "scheduler", 00:22:44.594 "config": [ 00:22:44.594 { 00:22:44.594 "method": "framework_set_scheduler", 00:22:44.594 "params": { 00:22:44.594 "name": "static" 00:22:44.594 } 00:22:44.594 } 00:22:44.594 ] 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "subsystem": "nvmf", 00:22:44.594 "config": [ 00:22:44.594 { 00:22:44.594 "method": "nvmf_set_config", 00:22:44.594 "params": { 00:22:44.594 "discovery_filter": "match_any", 00:22:44.594 "admin_cmd_passthru": { 00:22:44.594 "identify_ctrlr": false 00:22:44.594 }, 00:22:44.594 "dhchap_digests": [ 00:22:44.594 "sha256", 00:22:44.594 "sha384", 00:22:44.594 "sha512" 00:22:44.594 ], 00:22:44.594 "dhchap_dhgroups": [ 00:22:44.594 "null", 00:22:44.594 "ffdhe2048", 00:22:44.594 "ffdhe3072", 00:22:44.594 "ffdhe4096", 00:22:44.594 "ffdhe6144", 00:22:44.594 "ffdhe8192" 00:22:44.594 ] 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "nvmf_set_max_subsystems", 00:22:44.594 "params": { 00:22:44.594 "max_subsystems": 1024 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "nvmf_set_crdt", 00:22:44.594 "params": { 00:22:44.594 "crdt1": 0, 00:22:44.594 "crdt2": 0, 00:22:44.594 "crdt3": 0 00:22:44.594 } 00:22:44.594 }, 00:22:44.594 { 00:22:44.594 "method": "nvmf_create_transport", 00:22:44.594 "params": { 00:22:44.594 "trtype": "TCP", 00:22:44.594 "max_queue_depth": 128, 00:22:44.594 "max_io_qpairs_per_ctrlr": 127, 00:22:44.594 "in_capsule_data_size": 4096, 00:22:44.595 "max_io_size": 131072, 00:22:44.595 "io_unit_size": 131072, 00:22:44.595 "max_aq_depth": 128, 00:22:44.595 "num_shared_buffers": 511, 00:22:44.595 "buf_cache_size": 4294967295, 00:22:44.595 "dif_insert_or_strip": false, 00:22:44.595 "zcopy": false, 00:22:44.595 "c2h_success": false, 00:22:44.595 "sock_priority": 0, 00:22:44.595 "abort_timeout_sec": 1, 00:22:44.595 "ack_timeout": 0, 00:22:44.595 "data_wr_pool_size": 0 00:22:44.595 } 00:22:44.595 }, 00:22:44.595 { 00:22:44.595 "method": "nvmf_create_subsystem", 00:22:44.595 "params": { 00:22:44.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.595 "allow_any_host": false, 00:22:44.595 "serial_number": "00000000000000000000", 00:22:44.595 "model_number": "SPDK bdev Controller", 00:22:44.595 "max_namespaces": 32, 00:22:44.595 "min_cntlid": 1, 00:22:44.595 "max_cntlid": 65519, 00:22:44.595 "ana_reporting": false 00:22:44.595 } 00:22:44.595 }, 00:22:44.595 { 00:22:44.595 "method": "nvmf_subsystem_add_host", 00:22:44.595 "params": { 00:22:44.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.595 "host": "nqn.2016-06.io.spdk:host1", 00:22:44.595 "psk": "key0" 00:22:44.595 } 00:22:44.595 }, 00:22:44.595 { 00:22:44.595 "method": "nvmf_subsystem_add_ns", 00:22:44.595 "params": { 00:22:44.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.595 "namespace": { 00:22:44.595 "nsid": 1, 00:22:44.595 "bdev_name": "malloc0", 00:22:44.595 "nguid": "9AE4CD434069450DBFD86180953E1490", 00:22:44.595 "uuid": "9ae4cd43-4069-450d-bfd8-6180953e1490", 00:22:44.595 "no_auto_visible": false 00:22:44.595 } 00:22:44.595 } 00:22:44.595 }, 00:22:44.595 { 00:22:44.595 "method": "nvmf_subsystem_add_listener", 00:22:44.595 "params": { 00:22:44.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.595 "listen_address": { 00:22:44.595 "trtype": "TCP", 00:22:44.595 "adrfam": "IPv4", 00:22:44.595 "traddr": "10.0.0.2", 00:22:44.595 "trsvcid": "4420" 00:22:44.595 }, 00:22:44.595 
"secure_channel": false, 00:22:44.595 "sock_impl": "ssl" 00:22:44.595 } 00:22:44.595 } 00:22:44.595 ] 00:22:44.595 } 00:22:44.595 ] 00:22:44.595 }' 00:22:44.595 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.595 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1667477 00:22:44.595 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1667477 00:22:44.595 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:44.595 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1667477 ']' 00:22:44.595 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.595 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.595 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.595 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.595 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.595 [2024-12-06 17:38:36.594583] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:22:44.595 [2024-12-06 17:38:36.594634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.854 [2024-12-06 17:38:36.682797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.854 [2024-12-06 17:38:36.710351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.854 [2024-12-06 17:38:36.710384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.854 [2024-12-06 17:38:36.710390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.854 [2024-12-06 17:38:36.710395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.854 [2024-12-06 17:38:36.710400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:44.854 [2024-12-06 17:38:36.710870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.854 [2024-12-06 17:38:36.904998] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.114 [2024-12-06 17:38:36.937023] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.114 [2024-12-06 17:38:36.937231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1667513 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1667513 /var/tmp/bdevperf.sock 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1667513 ']' 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
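
The bdevperf instance starting here (pid 1667513) is fed its configuration the same way: the @274 invocation just below passes the saved initiator JSON on /dev/fd/63, so the keyring entry and the controller attach happen during startup instead of via later RPCs. The equivalent direct command, sketched:

  # -z waits for perform_tests over RPC; -c replays the saved bperfcfg JSON.
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")
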
00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.373 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.632 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:45.632 "subsystems": [ 00:22:45.632 { 00:22:45.632 "subsystem": "keyring", 00:22:45.632 "config": [ 00:22:45.632 { 00:22:45.632 "method": "keyring_file_add_key", 00:22:45.632 "params": { 00:22:45.632 "name": "key0", 00:22:45.632 "path": "/tmp/tmp.SI9CzZBvy3" 00:22:45.632 } 00:22:45.632 } 00:22:45.632 ] 00:22:45.632 }, 00:22:45.632 { 00:22:45.632 "subsystem": "iobuf", 00:22:45.632 "config": [ 00:22:45.632 { 00:22:45.632 "method": "iobuf_set_options", 00:22:45.632 "params": { 00:22:45.632 "small_pool_count": 8192, 00:22:45.632 "large_pool_count": 1024, 00:22:45.632 "small_bufsize": 8192, 00:22:45.632 "large_bufsize": 135168, 00:22:45.632 "enable_numa": false 00:22:45.632 } 00:22:45.632 } 00:22:45.632 ] 00:22:45.632 }, 00:22:45.632 { 00:22:45.632 "subsystem": "sock", 00:22:45.632 "config": [ 00:22:45.632 { 00:22:45.632 "method": "sock_set_default_impl", 00:22:45.632 "params": { 00:22:45.632 "impl_name": "posix" 00:22:45.632 } 00:22:45.632 }, 00:22:45.632 { 00:22:45.632 "method": "sock_impl_set_options", 00:22:45.632 "params": { 00:22:45.632 "impl_name": "ssl", 00:22:45.632 "recv_buf_size": 4096, 00:22:45.632 "send_buf_size": 4096, 00:22:45.632 "enable_recv_pipe": true, 00:22:45.632 "enable_quickack": false, 00:22:45.632 "enable_placement_id": 0, 00:22:45.632 "enable_zerocopy_send_server": true, 00:22:45.632 "enable_zerocopy_send_client": false, 00:22:45.632 "zerocopy_threshold": 0, 00:22:45.632 "tls_version": 0, 00:22:45.632 "enable_ktls": false 00:22:45.632 } 00:22:45.632 }, 00:22:45.632 { 00:22:45.632 "method": "sock_impl_set_options", 00:22:45.632 "params": { 00:22:45.632 "impl_name": "posix", 00:22:45.632 "recv_buf_size": 2097152, 00:22:45.632 "send_buf_size": 2097152, 00:22:45.632 "enable_recv_pipe": true, 00:22:45.632 "enable_quickack": false, 00:22:45.632 "enable_placement_id": 0, 00:22:45.632 "enable_zerocopy_send_server": true, 00:22:45.632 "enable_zerocopy_send_client": false, 00:22:45.632 "zerocopy_threshold": 0, 00:22:45.632 "tls_version": 0, 00:22:45.632 "enable_ktls": false 00:22:45.632 } 00:22:45.632 } 00:22:45.632 ] 00:22:45.632 }, 00:22:45.632 { 00:22:45.632 "subsystem": "vmd", 00:22:45.632 "config": [] 00:22:45.632 }, 00:22:45.632 { 00:22:45.632 "subsystem": "accel", 00:22:45.632 "config": [ 00:22:45.632 { 00:22:45.633 "method": "accel_set_options", 00:22:45.633 "params": { 00:22:45.633 "small_cache_size": 128, 00:22:45.633 "large_cache_size": 16, 00:22:45.633 "task_count": 2048, 00:22:45.633 "sequence_count": 2048, 00:22:45.633 "buf_count": 2048 00:22:45.633 } 00:22:45.633 } 00:22:45.633 ] 00:22:45.633 }, 00:22:45.633 { 00:22:45.633 "subsystem": "bdev", 00:22:45.633 "config": [ 00:22:45.633 { 00:22:45.633 "method": "bdev_set_options", 00:22:45.633 "params": { 00:22:45.633 "bdev_io_pool_size": 65535, 00:22:45.633 "bdev_io_cache_size": 256, 00:22:45.633 "bdev_auto_examine": true, 00:22:45.633 "iobuf_small_cache_size": 128, 00:22:45.633 "iobuf_large_cache_size": 16 00:22:45.633 } 00:22:45.633 }, 00:22:45.633 { 00:22:45.633 "method": 
"bdev_raid_set_options", 00:22:45.633 "params": { 00:22:45.633 "process_window_size_kb": 1024, 00:22:45.633 "process_max_bandwidth_mb_sec": 0 00:22:45.633 } 00:22:45.633 }, 00:22:45.633 { 00:22:45.633 "method": "bdev_iscsi_set_options", 00:22:45.633 "params": { 00:22:45.633 "timeout_sec": 30 00:22:45.633 } 00:22:45.633 }, 00:22:45.633 { 00:22:45.633 "method": "bdev_nvme_set_options", 00:22:45.633 "params": { 00:22:45.633 "action_on_timeout": "none", 00:22:45.633 "timeout_us": 0, 00:22:45.633 "timeout_admin_us": 0, 00:22:45.633 "keep_alive_timeout_ms": 10000, 00:22:45.633 "arbitration_burst": 0, 00:22:45.633 "low_priority_weight": 0, 00:22:45.633 "medium_priority_weight": 0, 00:22:45.633 "high_priority_weight": 0, 00:22:45.633 "nvme_adminq_poll_period_us": 10000, 00:22:45.633 "nvme_ioq_poll_period_us": 0, 00:22:45.633 "io_queue_requests": 512, 00:22:45.633 "delay_cmd_submit": true, 00:22:45.633 "transport_retry_count": 4, 00:22:45.633 "bdev_retry_count": 3, 00:22:45.633 "transport_ack_timeout": 0, 00:22:45.633 "ctrlr_loss_timeout_sec": 0, 00:22:45.633 "reconnect_delay_sec": 0, 00:22:45.633 "fast_io_fail_timeout_sec": 0, 00:22:45.633 "disable_auto_failback": false, 00:22:45.633 "generate_uuids": false, 00:22:45.633 "transport_tos": 0, 00:22:45.633 "nvme_error_stat": false, 00:22:45.633 "rdma_srq_size": 0, 00:22:45.633 "io_path_stat": false, 00:22:45.633 "allow_accel_sequence": false, 00:22:45.633 "rdma_max_cq_size": 0, 00:22:45.633 "rdma_cm_event_timeout_ms": 0, 00:22:45.633 "dhchap_digests": [ 00:22:45.633 "sha256", 00:22:45.633 "sha384", 00:22:45.633 "sha512" 00:22:45.633 ], 00:22:45.633 "dhchap_dhgroups": [ 00:22:45.633 "null", 00:22:45.633 "ffdhe2048", 00:22:45.633 "ffdhe3072", 00:22:45.633 "ffdhe4096", 00:22:45.633 "ffdhe6144", 00:22:45.633 "ffdhe8192" 00:22:45.633 ] 00:22:45.633 } 00:22:45.633 }, 00:22:45.633 { 00:22:45.633 "method": "bdev_nvme_attach_controller", 00:22:45.633 "params": { 00:22:45.633 "name": "nvme0", 00:22:45.633 "trtype": "TCP", 00:22:45.633 "adrfam": "IPv4", 00:22:45.633 "traddr": "10.0.0.2", 00:22:45.633 "trsvcid": "4420", 00:22:45.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.633 "prchk_reftag": false, 00:22:45.633 "prchk_guard": false, 00:22:45.633 "ctrlr_loss_timeout_sec": 0, 00:22:45.633 "reconnect_delay_sec": 0, 00:22:45.633 "fast_io_fail_timeout_sec": 0, 00:22:45.633 "psk": "key0", 00:22:45.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.633 "hdgst": false, 00:22:45.633 "ddgst": false, 00:22:45.633 "multipath": "multipath" 00:22:45.633 } 00:22:45.633 }, 00:22:45.633 { 00:22:45.633 "method": "bdev_nvme_set_hotplug", 00:22:45.633 "params": { 00:22:45.633 "period_us": 100000, 00:22:45.633 "enable": false 00:22:45.633 } 00:22:45.633 }, 00:22:45.633 { 00:22:45.633 "method": "bdev_enable_histogram", 00:22:45.633 "params": { 00:22:45.633 "name": "nvme0n1", 00:22:45.633 "enable": true 00:22:45.633 } 00:22:45.633 }, 00:22:45.633 { 00:22:45.633 "method": "bdev_wait_for_examine" 00:22:45.633 } 00:22:45.633 ] 00:22:45.633 }, 00:22:45.633 { 00:22:45.633 "subsystem": "nbd", 00:22:45.633 "config": [] 00:22:45.633 } 00:22:45.633 ] 00:22:45.633 }' 00:22:45.633 [2024-12-06 17:38:37.495517] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:22:45.633 [2024-12-06 17:38:37.495585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667513 ] 00:22:45.633 [2024-12-06 17:38:37.579885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.633 [2024-12-06 17:38:37.609667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.893 [2024-12-06 17:38:37.745529] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.477 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.477 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:46.477 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:46.477 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:46.477 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.477 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.737 Running I/O for 1 seconds... 00:22:47.676 5410.00 IOPS, 21.13 MiB/s 00:22:47.676 Latency(us) 00:22:47.676 [2024-12-06T16:38:39.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.676 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:47.676 Verification LBA range: start 0x0 length 0x2000 00:22:47.676 nvme0n1 : 1.01 5473.16 21.38 0.00 0.00 23242.56 4669.44 25012.91 00:22:47.676 [2024-12-06T16:38:39.742Z] =================================================================================================================== 00:22:47.676 [2024-12-06T16:38:39.742Z] Total : 5473.16 21.38 0.00 0.00 23242.56 4669.44 25012.91 00:22:47.676 { 00:22:47.676 "results": [ 00:22:47.676 { 00:22:47.676 "job": "nvme0n1", 00:22:47.676 "core_mask": "0x2", 00:22:47.676 "workload": "verify", 00:22:47.676 "status": "finished", 00:22:47.676 "verify_range": { 00:22:47.676 "start": 0, 00:22:47.676 "length": 8192 00:22:47.676 }, 00:22:47.676 "queue_depth": 128, 00:22:47.676 "io_size": 4096, 00:22:47.676 "runtime": 1.011846, 00:22:47.676 "iops": 5473.164888728126, 00:22:47.676 "mibps": 21.379550346594243, 00:22:47.676 "io_failed": 0, 00:22:47.676 "io_timeout": 0, 00:22:47.676 "avg_latency_us": 23242.563890694593, 00:22:47.676 "min_latency_us": 4669.44, 00:22:47.676 "max_latency_us": 25012.906666666666 00:22:47.676 } 00:22:47.676 ], 00:22:47.676 "core_count": 1 00:22:47.676 } 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid 
']' 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:47.676 nvmf_trace.0 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1667513 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667513 ']' 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667513 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.676 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667513 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667513' 00:22:47.937 killing process with pid 1667513 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667513 00:22:47.937 Received shutdown signal, test time was about 1.000000 seconds 00:22:47.937 00:22:47.937 Latency(us) 00:22:47.937 [2024-12-06T16:38:40.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.937 [2024-12-06T16:38:40.003Z] =================================================================================================================== 00:22:47.937 [2024-12-06T16:38:40.003Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667513 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.937 rmmod nvme_tcp 00:22:47.937 rmmod nvme_fabrics 00:22:47.937 rmmod nvme_keyring 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.937 17:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1667477 ']' 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1667477 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1667477 ']' 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1667477 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667477 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667477' 00:22:47.937 killing process with pid 1667477 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1667477 00:22:47.937 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1667477 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.197 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.zqgP2MhczP /tmp/tmp.hsaaxWXvV6 /tmp/tmp.SI9CzZBvy3 00:22:50.740 00:22:50.740 real 1m28.144s 00:22:50.740 user 2m19.627s 00:22:50.740 sys 0m27.295s 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.740 ************************************ 00:22:50.740 END TEST nvmf_tls 
00:22:50.740 ************************************ 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:50.740 ************************************ 00:22:50.740 START TEST nvmf_fips 00:22:50.740 ************************************ 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:50.740 * Looking for test storage... 00:22:50.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.740 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:50.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.741 --rc genhtml_branch_coverage=1 00:22:50.741 --rc genhtml_function_coverage=1 00:22:50.741 --rc genhtml_legend=1 00:22:50.741 --rc geninfo_all_blocks=1 00:22:50.741 --rc geninfo_unexecuted_blocks=1 00:22:50.741 00:22:50.741 ' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:50.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.741 --rc genhtml_branch_coverage=1 00:22:50.741 --rc genhtml_function_coverage=1 00:22:50.741 --rc genhtml_legend=1 00:22:50.741 --rc geninfo_all_blocks=1 00:22:50.741 --rc geninfo_unexecuted_blocks=1 00:22:50.741 00:22:50.741 ' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:50.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.741 --rc genhtml_branch_coverage=1 00:22:50.741 --rc genhtml_function_coverage=1 00:22:50.741 --rc genhtml_legend=1 00:22:50.741 --rc geninfo_all_blocks=1 00:22:50.741 --rc geninfo_unexecuted_blocks=1 00:22:50.741 00:22:50.741 ' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:50.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.741 --rc genhtml_branch_coverage=1 00:22:50.741 --rc genhtml_function_coverage=1 00:22:50.741 --rc genhtml_legend=1 00:22:50.741 --rc geninfo_all_blocks=1 00:22:50.741 --rc geninfo_unexecuted_blocks=1 00:22:50.741 00:22:50.741 ' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:50.741 17:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.741 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:50.742 Error setting digest 00:22:50.742 40424CDD2F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:50.742 40424CDD2F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.742 
17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.742 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.870 17:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:58.870 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:58.870 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.870 17:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:58.870 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:58.870 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:58.870 17:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.870 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:22:58.870 00:22:58.870 --- 10.0.0.2 ping statistics --- 00:22:58.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.871 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:22:58.871 00:22:58.871 --- 10.0.0.1 ping statistics --- 00:22:58.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.871 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1670029 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1670029 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1670029 ']' 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.871 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:58.871 [2024-12-06 17:38:50.145022] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
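The nvmftestinit trace above isolates the target-side port in its own network namespace so that initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) exchange real packets on a single host. Condensed from the commands recorded in the trace (interface names cvl_0_0/cvl_0_1 are as discovered on this runner):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2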
00:22:58.871 [2024-12-06 17:38:50.145094] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.871 [2024-12-06 17:38:50.245066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.871 [2024-12-06 17:38:50.295020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.871 [2024-12-06 17:38:50.295077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.871 [2024-12-06 17:38:50.295086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.871 [2024-12-06 17:38:50.295093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.871 [2024-12-06 17:38:50.295099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.871 [2024-12-06 17:38:50.295871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.130 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.130 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:59.130 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:59.130 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:59.130 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:59.130 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.130 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:59.130 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:59.130 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:59.130 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.N5A 00:22:59.130 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:59.130 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.N5A 00:22:59.130 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.N5A 00:22:59.130 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.N5A 00:22:59.130 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:59.130 [2024-12-06 17:38:51.171719] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.130 [2024-12-06 17:38:51.187712] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:59.130 [2024-12-06 17:38:51.188003] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.389 malloc0 00:22:59.389 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.389 17:38:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1670069 00:22:59.389 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1670069 /var/tmp/bdevperf.sock 00:22:59.389 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.389 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1670069 ']' 00:22:59.389 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.389 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.389 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.389 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.389 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:59.389 [2024-12-06 17:38:51.342860] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:22:59.389 [2024-12-06 17:38:51.342935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1670069 ] 00:22:59.389 [2024-12-06 17:38:51.437409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.648 [2024-12-06 17:38:51.487958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.218 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.218 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:00.218 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.N5A 00:23:00.478 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.478 [2024-12-06 17:38:52.431579] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.478 TLSTESTn1 00:23:00.478 17:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:00.738 Running I/O for 10 seconds... 
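For reference, the RPC sequence that produced the 10-second run reported below boils down to the following (paths abbreviated relative to the spdk checkout; the PSK file name /tmp/spdk-psk.N5A came from mktemp on this particular run):

    key_path=$(mktemp -t spdk-psk.XXX)               # /tmp/spdk-psk.N5A on this run
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests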
00:23:02.732 3851.00 IOPS, 15.04 MiB/s [2024-12-06T16:38:55.734Z] 4349.00 IOPS, 16.99 MiB/s [2024-12-06T16:38:56.679Z] 4871.00 IOPS, 19.03 MiB/s [2024-12-06T16:38:57.618Z] 4952.00 IOPS, 19.34 MiB/s [2024-12-06T16:38:58.998Z] 4980.20 IOPS, 19.45 MiB/s [2024-12-06T16:38:59.938Z] 5063.17 IOPS, 19.78 MiB/s [2024-12-06T16:39:00.876Z] 5147.29 IOPS, 20.11 MiB/s [2024-12-06T16:39:01.815Z] 5207.88 IOPS, 20.34 MiB/s [2024-12-06T16:39:02.884Z] 5182.33 IOPS, 20.24 MiB/s [2024-12-06T16:39:02.884Z] 5228.20 IOPS, 20.42 MiB/s 00:23:10.818 Latency(us) 00:23:10.818 [2024-12-06T16:39:02.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.818 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:10.818 Verification LBA range: start 0x0 length 0x2000 00:23:10.818 TLSTESTn1 : 10.02 5230.20 20.43 0.00 0.00 24428.55 5843.63 79953.92 00:23:10.818 [2024-12-06T16:39:02.884Z] =================================================================================================================== 00:23:10.818 [2024-12-06T16:39:02.884Z] Total : 5230.20 20.43 0.00 0.00 24428.55 5843.63 79953.92 00:23:10.818 { 00:23:10.818 "results": [ 00:23:10.818 { 00:23:10.818 "job": "TLSTESTn1", 00:23:10.818 "core_mask": "0x4", 00:23:10.818 "workload": "verify", 00:23:10.818 "status": "finished", 00:23:10.818 "verify_range": { 00:23:10.818 "start": 0, 00:23:10.818 "length": 8192 00:23:10.818 }, 00:23:10.818 "queue_depth": 128, 00:23:10.818 "io_size": 4096, 00:23:10.818 "runtime": 10.020653, 00:23:10.818 "iops": 5230.198071922059, 00:23:10.818 "mibps": 20.430461218445544, 00:23:10.818 "io_failed": 0, 00:23:10.818 "io_timeout": 0, 00:23:10.818 "avg_latency_us": 24428.548689181454, 00:23:10.818 "min_latency_us": 5843.626666666667, 00:23:10.818 "max_latency_us": 79953.92 00:23:10.818 } 00:23:10.818 ], 00:23:10.818 "core_count": 1 00:23:10.818 } 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:10.818 nvmf_trace.0 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1670069 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1670069 ']' 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1670069 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1670069 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1670069' 00:23:10.818 killing process with pid 1670069 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1670069 00:23:10.818 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.818 00:23:10.818 Latency(us) 00:23:10.818 [2024-12-06T16:39:02.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.818 [2024-12-06T16:39:02.884Z] =================================================================================================================== 00:23:10.818 [2024-12-06T16:39:02.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.818 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1670069 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:11.079 rmmod nvme_tcp 00:23:11.079 rmmod nvme_fabrics 00:23:11.079 rmmod nvme_keyring 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1670029 ']' 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1670029 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1670029 ']' 00:23:11.079 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1670029 00:23:11.079 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:11.079 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.079 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1670029 00:23:11.079 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:11.079 17:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.079 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1670029' 00:23:11.079 killing process with pid 1670029 00:23:11.079 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1670029 00:23:11.079 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1670029 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.339 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.247 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:13.247 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.N5A 00:23:13.247 00:23:13.247 real 0m22.989s 00:23:13.247 user 0m24.708s 00:23:13.247 sys 0m9.490s 00:23:13.247 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.247 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:13.247 ************************************ 00:23:13.247 END TEST nvmf_fips 00:23:13.247 ************************************ 00:23:13.247 17:39:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:13.247 17:39:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:13.247 17:39:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:13.247 17:39:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:13.506 ************************************ 00:23:13.506 START TEST nvmf_control_msg_list 00:23:13.506 ************************************ 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:13.506 * Looking for test storage... 
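[For reference, the nvmftestfini teardown the fips test just performed condenses to the steps below. The namespace removal is hidden inside the _remove_spdk_ns helper, so the ip netns delete line is an assumption about what that helper does rather than a command visible in the trace:

  # unload the initiator-side kernel modules (nvme_tcp, nvme_fabrics, nvme_keyring)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # strip only the SPDK-tagged firewall rules, restore everything else untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # assumed body of _remove_spdk_ns: drop the target-side namespace
  ip netns delete cvl_0_0_ns_spdk
  # flush the initiator-side address and remove the per-run PSK file
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk-psk.N5A]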
00:23:13.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:13.506 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:13.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.507 --rc genhtml_branch_coverage=1 00:23:13.507 --rc genhtml_function_coverage=1 00:23:13.507 --rc genhtml_legend=1 00:23:13.507 --rc geninfo_all_blocks=1 00:23:13.507 --rc geninfo_unexecuted_blocks=1 00:23:13.507 00:23:13.507 ' 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:13.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.507 --rc genhtml_branch_coverage=1 00:23:13.507 --rc genhtml_function_coverage=1 00:23:13.507 --rc genhtml_legend=1 00:23:13.507 --rc geninfo_all_blocks=1 00:23:13.507 --rc geninfo_unexecuted_blocks=1 00:23:13.507 00:23:13.507 ' 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:13.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.507 --rc genhtml_branch_coverage=1 00:23:13.507 --rc genhtml_function_coverage=1 00:23:13.507 --rc genhtml_legend=1 00:23:13.507 --rc geninfo_all_blocks=1 00:23:13.507 --rc geninfo_unexecuted_blocks=1 00:23:13.507 00:23:13.507 ' 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:13.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.507 --rc genhtml_branch_coverage=1 00:23:13.507 --rc genhtml_function_coverage=1 00:23:13.507 --rc genhtml_legend=1 00:23:13.507 --rc geninfo_all_blocks=1 00:23:13.507 --rc geninfo_unexecuted_blocks=1 00:23:13.507 00:23:13.507 ' 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.507 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:13.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:13.767 17:39:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:21.898 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.898 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:21.898 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:21.898 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:21.899 17:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:21.899 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.899 17:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:21.899 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:21.899 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:21.899 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:21.899 17:39:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.899 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.899 17:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.899 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:21.899 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:21.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:23:21.899 00:23:21.899 --- 10.0.0.2 ping statistics --- 00:23:21.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.899 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:23:21.899 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:23:21.900 00:23:21.900 --- 10.0.0.1 ping statistics --- 00:23:21.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.900 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1673239 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1673239 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1673239 ']' 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.900 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:21.900 [2024-12-06 17:39:13.155925] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:23:21.900 [2024-12-06 17:39:13.155996] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.900 [2024-12-06 17:39:13.256615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.900 [2024-12-06 17:39:13.307534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.900 [2024-12-06 17:39:13.307583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.900 [2024-12-06 17:39:13.307592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.900 [2024-12-06 17:39:13.307601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.900 [2024-12-06 17:39:13.307607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
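[Before starting the target, nvmftestinit carved the two-port E810 NIC into a point-to-point test network: the target port (cvl_0_0) moves into its own namespace while the initiator port (cvl_0_1) stays in the root namespace, so target and initiator traffic crosses a real link even though both ends live on one host. Condensed from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port, tagged so teardown can strip exactly this rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk (the -i 0 -e 0xFFFF invocation traced above), which is why it only ever sees the target-side port.]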
00:23:21.900 [2024-12-06 17:39:13.308340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.160 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.160 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:22.160 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.160 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.160 17:39:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 [2024-12-06 17:39:14.031758] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:22.160 Malloc0 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:22.160 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.161 17:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:22.161 [2024-12-06 17:39:14.086194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1673275 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1673276 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1673277 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1673275 00:23:22.161 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:22.161 [2024-12-06 17:39:14.186796] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:22.161 [2024-12-06 17:39:14.197068] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:22.161 [2024-12-06 17:39:14.197375] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:23.542 Initializing NVMe Controllers 00:23:23.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:23.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:23.542 Initialization complete. Launching workers. 
00:23:23.542 ======================================================== 00:23:23.542 Latency(us) 00:23:23.542 Device Information : IOPS MiB/s Average min max 00:23:23.542 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 219.00 0.86 4614.80 249.91 42000.81 00:23:23.542 ======================================================== 00:23:23.542 Total : 219.00 0.86 4614.80 249.91 42000.81 00:23:23.542 00:23:23.542 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1673276 00:23:23.542 Initializing NVMe Controllers 00:23:23.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:23.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:23.542 Initialization complete. Launching workers. 00:23:23.542 ======================================================== 00:23:23.542 Latency(us) 00:23:23.542 Device Information : IOPS MiB/s Average min max 00:23:23.543 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1503.00 5.87 665.27 164.25 824.60 00:23:23.543 ======================================================== 00:23:23.543 Total : 1503.00 5.87 665.27 164.25 824.60 00:23:23.543 00:23:23.543 [2024-12-06 17:39:15.310680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863480 is same with the state(6) to be set 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1673277 00:23:23.543 Initializing NVMe Controllers 00:23:23.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:23.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:23.543 Initialization complete. Launching workers. 
00:23:23.543 ======================================================== 00:23:23.543 Latency(us) 00:23:23.543 Device Information : IOPS MiB/s Average min max 00:23:23.543 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1462.00 5.71 684.07 285.86 863.29 00:23:23.543 ======================================================== 00:23:23.543 Total : 1462.00 5.71 684.07 285.86 863.29 00:23:23.543 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.543 rmmod nvme_tcp 00:23:23.543 rmmod nvme_fabrics 00:23:23.543 rmmod nvme_keyring 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1673239 ']' 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1673239 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1673239 ']' 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1673239 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1673239 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1673239' 00:23:23.543 killing process with pid 1673239 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1673239 00:23:23.543 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1673239 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.802 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.711 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:25.711 00:23:25.711 real 0m12.402s 00:23:25.711 user 0m7.861s 00:23:25.711 sys 0m6.617s 00:23:25.711 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.711 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:25.711 ************************************ 00:23:25.711 END TEST nvmf_control_msg_list 00:23:25.711 ************************************ 00:23:25.971 17:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:25.971 17:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:25.971 17:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.971 17:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:25.971 ************************************ 00:23:25.971 START TEST nvmf_wait_for_buf 00:23:25.971 ************************************ 00:23:25.971 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:25.971 * Looking for test storage... 
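[Stripped of tracing, the control-message-list test that just finished is a small target configuration plus three competing single-queue readers. The transport is created with a deliberately tiny control message pool (--control-msg-num 1) and an in-capsule data size of 768 bytes, so the concurrent initiators queue behind the shared pool; the skewed per-core latencies in the tables above (about 4.6 ms average on core 1 versus roughly 0.67 ms on cores 2 and 3) are presumably the behavior under test. A condensed sketch, with rpc.py talking to the target's default /var/tmp/spdk.sock and the three backgrounded perf clients folded into a loop here for brevity:

  scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # three 1-second randread clients pinned to different cores (masks 0x2, 0x4, 0x8)
  for mask in 0x2 0x4 0x8; do
      build/bin/spdk_nvme_perf -c $mask -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait]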
00:23:25.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:25.971 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:25.971 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:25.971 17:39:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.971 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:26.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.232 --rc genhtml_branch_coverage=1 00:23:26.232 --rc genhtml_function_coverage=1 00:23:26.232 --rc genhtml_legend=1 00:23:26.232 --rc geninfo_all_blocks=1 00:23:26.232 --rc geninfo_unexecuted_blocks=1 00:23:26.232 00:23:26.232 ' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:26.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.232 --rc genhtml_branch_coverage=1 00:23:26.232 --rc genhtml_function_coverage=1 00:23:26.232 --rc genhtml_legend=1 00:23:26.232 --rc geninfo_all_blocks=1 00:23:26.232 --rc geninfo_unexecuted_blocks=1 00:23:26.232 00:23:26.232 ' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:26.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.232 --rc genhtml_branch_coverage=1 00:23:26.232 --rc genhtml_function_coverage=1 00:23:26.232 --rc genhtml_legend=1 00:23:26.232 --rc geninfo_all_blocks=1 00:23:26.232 --rc geninfo_unexecuted_blocks=1 00:23:26.232 00:23:26.232 ' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:26.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.232 --rc genhtml_branch_coverage=1 00:23:26.232 --rc genhtml_function_coverage=1 00:23:26.232 --rc genhtml_legend=1 00:23:26.232 --rc geninfo_all_blocks=1 00:23:26.232 --rc geninfo_unexecuted_blocks=1 00:23:26.232 00:23:26.232 ' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.232 17:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.232 17:39:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.366 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.367 
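
The "[: : integer expression expected" complaint above is a real, if harmless, bug captured by the trace: line 33 of nvmf/common.sh runs a numeric test while its flag variable is unset, so test receives an empty string where an integer is required (the trace shows only the evaluated form '[' '' -eq 1 ']', so the variable name below is a stand-in). A defensive spelling defaults the value first:

    FLAG=""                                    # stands in for whatever line 33 tests
    [ "$FLAG" -eq 1 ] && echo enabled          # -> "[: : integer expression expected"

    # Two tolerant spellings that treat an unset/empty flag as 0:
    [ "${FLAG:-0}" -eq 1 ] && echo enabled
    (( ${FLAG:-0} == 1 )) && echo enabled
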
17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:34.367 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:34.367 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:34.367 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:34.367 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.367 17:39:25 
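
gather_supported_nvmf_pci_devs, traced above, works from a table of PCI vendor:device IDs (Intel E810/X722 plus several Mellanox parts) and then maps each matching PCI address to its kernel net device through sysfs, which is how the two 0x8086:0x159b ports resolve to cvl_0_0 and cvl_0_1. A simplified walk in the same spirit, with the PCI addresses taken from the log:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue       # glob may match nothing
            dev=${netdir##*/}                  # strip the path, keep the ifname
            state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
            echo "Found net devices under $pci: $dev ($state)"
        done
    done

Only interfaces whose operstate is "up" are kept, which is why the trace checks [[ up == up ]] before appending to net_devs.
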
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:34.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:23:34.367 00:23:34.367 --- 10.0.0.2 ping statistics --- 00:23:34.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.367 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:23:34.367 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:23:34.367 00:23:34.368 --- 10.0.0.1 ping statistics --- 00:23:34.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.368 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1675739 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1675739 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1675739 ']' 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.368 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 [2024-12-06 17:39:25.562783] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
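
nvmf_tcp_init, traced above, builds a two-host topology on one machine: one physical port (cvl_0_0, the target side, 10.0.0.2) moves into a private network namespace, the other (cvl_0_1, the initiator side, 10.0.0.1) stays in the root namespace, an iptables ACCEPT rule opens TCP/4420, and a ping in each direction proves the path works. Stripped of the xtrace noise, the sequence is:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                              # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # and back

The comment tag on the iptables rule matters later: teardown removes rules by that tag rather than by position.
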
00:23:34.368 [2024-12-06 17:39:25.562849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.368 [2024-12-06 17:39:25.662242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.368 [2024-12-06 17:39:25.712754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.368 [2024-12-06 17:39:25.712807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.368 [2024-12-06 17:39:25.712815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.368 [2024-12-06 17:39:25.712822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.368 [2024-12-06 17:39:25.712828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.368 [2024-12-06 17:39:25.713577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:34.368 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.368 17:39:26 
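
Starting nvmf_tgt with --wait-for-rpc is what lets the script shrink the shared iobuf pool before subsystem initialization: iobuf_set_options --small-pool-count 154 caps the small-buffer pool far below what the transport will request, which is the entire point of the wait_for_buf test. The RPC sequence traced above and just below could be driven directly with SPDK's rpc.py (socket path and script location assumed to be the defaults):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
    $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $RPC framework_start_init                       # leave --wait-for-rpc limbo
    $RPC bdev_malloc_create -b Malloc0 32 512       # 32 MiB ramdisk, 512 B blocks
    $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24   # opts copied from the trace
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The deliberately tiny -n 24 / -b 24 transport buffer counts work together with the 154-entry small pool: the perf workload that follows (four queued 131072-byte random reads) cannot be satisfied without exhausting them.
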
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.631 Malloc0 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.631 [2024-12-06 17:39:26.518895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:34.631 [2024-12-06 17:39:26.543181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.631 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:34.631 [2024-12-06 17:39:26.646736] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:36.016 Initializing NVMe Controllers 00:23:36.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:36.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:36.016 Initialization complete. Launching workers. 00:23:36.016 ======================================================== 00:23:36.016 Latency(us) 00:23:36.016 Device Information : IOPS MiB/s Average min max 00:23:36.016 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32263.88 8008.62 63856.42 00:23:36.016 ======================================================== 00:23:36.016 Total : 129.00 16.12 32263.88 8008.62 63856.42 00:23:36.016 00:23:36.016 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:36.016 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:36.016 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.016 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:36.016 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.277 rmmod nvme_tcp 00:23:36.277 rmmod nvme_fabrics 00:23:36.277 rmmod nvme_keyring 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1675739 ']' 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1675739 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1675739 ']' 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1675739 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
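
The pass condition for wait_for_buf is the iobuf_get_stats read at the end of the run: the test passes when the nvmf_TCP module's small-pool retry counter is nonzero (2038 here), proving the transport really did run out of small buffers and queued waits instead of failing I/O. The jq filter from the trace, runnable standalone (rpc.py invocation assumed as above):

    retry_count=$(scripts/rpc.py -s /var/tmp/spdk.sock iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [ "${retry_count:-0}" -eq 0 ]; then
        echo "FAIL: small-buffer pool was never exhausted" >&2
        exit 1
    fi
    echo "PASS: $retry_count retries while waiting for iobufs"
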
common/autotest_common.sh@959 -- # uname 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1675739 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1675739' 00:23:36.277 killing process with pid 1675739 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1675739 00:23:36.277 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1675739 00:23:36.538 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.538 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.538 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.538 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:36.538 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:23:36.539 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.539 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.539 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.539 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.539 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.539 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.539 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.450 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.450 00:23:38.450 real 0m12.637s 00:23:38.450 user 0m5.025s 00:23:38.450 sys 0m6.199s 00:23:38.450 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.450 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:38.450 ************************************ 00:23:38.450 END TEST nvmf_wait_for_buf 00:23:38.450 ************************************ 00:23:38.450 17:39:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:23:38.450 17:39:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:23:38.450 17:39:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:23:38.450 17:39:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:23:38.451 17:39:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.451 17:39:30 
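
nvmftestfini, traced above, unwinds the environment by tag rather than by rule position: every iptables rule the suite added carries an SPDK_NVMF comment, so the whole ruleset can be filtered and restored in one pass. The namespace removal itself is not visible in the trace (xtrace is suppressed inside _remove_spdk_ns), so the second line below is an assumption about what that helper does:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only our tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumed: trace is suppressed
    ip -4 addr flush cvl_0_1                               # clear the initiator address
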
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:46.589 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:46.589 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:46.589 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:46.589 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:46.589 ************************************ 00:23:46.589 START TEST nvmf_perf_adq 00:23:46.589 ************************************ 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:46.589 * Looking for test storage... 00:23:46.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.589 17:39:37 
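
run_test is the autotest harness around each sub-test: it prints the START banner above, times the script (that is where the earlier "real 0m12.637s" summary came from), and prints a matching END banner on success. The real helper lives in autotest_common.sh; this hypothetical reduction only shows its shape:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # e.g. .../perf_adq.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

Because each sub-test re-sources test/nvmf/common.sh, the bootstrap that follows (NQN generation, PATH exports, the line-33 integer-expression warning, PCI discovery) repeats verbatim for nvmf_perf_adq.
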
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.589 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:46.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.590 --rc genhtml_branch_coverage=1 00:23:46.590 --rc genhtml_function_coverage=1 00:23:46.590 --rc genhtml_legend=1 00:23:46.590 --rc geninfo_all_blocks=1 00:23:46.590 --rc geninfo_unexecuted_blocks=1 00:23:46.590 00:23:46.590 ' 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:46.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.590 --rc genhtml_branch_coverage=1 00:23:46.590 --rc genhtml_function_coverage=1 00:23:46.590 --rc genhtml_legend=1 00:23:46.590 --rc geninfo_all_blocks=1 00:23:46.590 --rc geninfo_unexecuted_blocks=1 00:23:46.590 00:23:46.590 ' 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:46.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.590 --rc genhtml_branch_coverage=1 00:23:46.590 --rc genhtml_function_coverage=1 00:23:46.590 --rc genhtml_legend=1 00:23:46.590 --rc geninfo_all_blocks=1 00:23:46.590 --rc geninfo_unexecuted_blocks=1 00:23:46.590 00:23:46.590 ' 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:46.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.590 --rc genhtml_branch_coverage=1 00:23:46.590 --rc genhtml_function_coverage=1 00:23:46.590 --rc genhtml_legend=1 00:23:46.590 --rc geninfo_all_blocks=1 00:23:46.590 --rc geninfo_unexecuted_blocks=1 00:23:46.590 00:23:46.590 ' 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:46.590 17:39:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.590 17:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.180 17:39:44 
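gather_supported_nvmf_pci_devs buckets NICs purely by PCI vendor:device ID: Intel 0x1592/0x159b land in the e810 array, Intel 0x37d2 in x722, and the listed Mellanox IDs in mlx, after which e810 is selected for this tcp run. A rough standalone equivalent, assuming lspci -Dnn output rather than the harness's pre-built pci_bus_cache:

    # Sketch under that assumption; lspci -Dnn prints lines like
    # "0000:4b:00.0 Ethernet controller [0200]: ... [8086:159b]".
    declare -a e810 x722 mlx
    while read -r addr rest; do
        case "$rest" in
            *\[8086:1592\]*|*\[8086:159b\]*) e810+=("$addr") ;;  # Intel E810
            *\[8086:37d2\]*)                 x722+=("$addr") ;;  # Intel X722
            *\[15b3:*)                       mlx+=("$addr")  ;;  # Mellanox
        esac
    done < <(lspci -Dnn)
    (( ${#e810[@]} )) && printf 'Found %s (E810)\n' "${e810[@]}"
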
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:53.180 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:53.180 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:53.180 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:53.180 17:39:44 
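Each matched PCI function is then resolved to its kernel netdev through sysfs, and only interfaces that are administratively up are kept, which is what the repeated [[ up == up ]] checks above are doing. The lookup for the first port found in this run:

    # How a PCI address maps to its net device, as in the loop above.
    pci=0000:4b:00.0
    ls "/sys/bus/pci/devices/$pci/net/"                    # -> cvl_0_0
    cat "/sys/bus/pci/devices/$pci/net/cvl_0_0/operstate"  # -> up
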
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:53.180 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:53.180 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:54.564 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:56.479 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.768 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:01.769 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:01.769 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:01.769 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:01.769 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:01.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:24:01.769 00:24:01.769 --- 10.0.0.2 ping statistics --- 00:24:01.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.769 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:24:01.769 00:24:01.769 --- 10.0.0.1 ping statistics --- 00:24:01.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.769 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1680981 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1680981 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1680981 ']' 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.769 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.770 17:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.032 [2024-12-06 17:39:53.846210] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
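At this point nvmf_tcp_init has built the whole test topology: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stayed in the default namespace as the initiator side (10.0.0.1/24), TCP port 4420 was opened in iptables, and the two pings confirmed the path in both directions before nvmf_tgt was launched inside the namespace. Condensed from the trace, the same topology by hand:

    # Interface names follow this run (the two E810 ports).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as in the output above

The harness additionally tags its iptables rule with an SPDK_NVMF comment so that the teardown's iptables-save | grep -v SPDK_NVMF pass can strip it again.
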
00:24:02.032 [2024-12-06 17:39:53.846273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.032 [2024-12-06 17:39:53.945925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.032 [2024-12-06 17:39:54.000036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.032 [2024-12-06 17:39:54.000090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.032 [2024-12-06 17:39:54.000099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.032 [2024-12-06 17:39:54.000111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.032 [2024-12-06 17:39:54.000117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.032 [2024-12-06 17:39:54.002129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.032 [2024-12-06 17:39:54.002290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.032 [2024-12-06 17:39:54.002454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.032 [2024-12-06 17:39:54.002455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.976 
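adq_configure_nvmf_target 0, which starts here and finishes just below with the listener on 10.0.0.2:4420, amounts to the following RPC sequence; rpc_cmd in the trace is a thin wrapper around scripts/rpk.py-style calls to the same socket, so a sketch in plain scripts/rpc.py form:

    # Pass 0 of the test: placement-id 0; the busy-poll pass later in the
    # log repeats this with --enable-placement-id 1.
    rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
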
17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.976 [2024-12-06 17:39:54.872095] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.976 Malloc1 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.976 [2024-12-06 17:39:54.945707] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1681021 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:24:02.976 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:05.520 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:24:05.520 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.520 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:05.520 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.520 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:24:05.520 "tick_rate": 2400000000, 00:24:05.520 "poll_groups": [ 00:24:05.520 { 00:24:05.520 "name": "nvmf_tgt_poll_group_000", 00:24:05.520 "admin_qpairs": 1, 00:24:05.520 "io_qpairs": 1, 00:24:05.520 "current_admin_qpairs": 1, 00:24:05.520 "current_io_qpairs": 1, 00:24:05.520 "pending_bdev_io": 0, 00:24:05.520 "completed_nvme_io": 19289, 00:24:05.520 "transports": [ 00:24:05.520 { 00:24:05.520 "trtype": "TCP" 00:24:05.520 } 00:24:05.520 ] 00:24:05.520 }, 00:24:05.520 { 00:24:05.520 "name": "nvmf_tgt_poll_group_001", 00:24:05.520 "admin_qpairs": 0, 00:24:05.520 "io_qpairs": 1, 00:24:05.520 "current_admin_qpairs": 0, 00:24:05.520 "current_io_qpairs": 1, 00:24:05.520 "pending_bdev_io": 0, 00:24:05.520 "completed_nvme_io": 20190, 00:24:05.520 "transports": [ 00:24:05.520 { 00:24:05.520 "trtype": "TCP" 00:24:05.520 } 00:24:05.520 ] 00:24:05.520 }, 00:24:05.520 { 00:24:05.520 "name": "nvmf_tgt_poll_group_002", 00:24:05.520 "admin_qpairs": 0, 00:24:05.520 "io_qpairs": 1, 00:24:05.520 "current_admin_qpairs": 0, 00:24:05.520 "current_io_qpairs": 1, 00:24:05.520 "pending_bdev_io": 0, 00:24:05.520 "completed_nvme_io": 20315, 00:24:05.520 "transports": [ 00:24:05.520 { 00:24:05.520 "trtype": "TCP" 00:24:05.520 } 00:24:05.520 ] 00:24:05.520 }, 00:24:05.520 { 00:24:05.520 "name": "nvmf_tgt_poll_group_003", 00:24:05.520 "admin_qpairs": 0, 00:24:05.520 "io_qpairs": 1, 00:24:05.520 "current_admin_qpairs": 0, 00:24:05.520 "current_io_qpairs": 1, 00:24:05.520 "pending_bdev_io": 0, 00:24:05.520 "completed_nvme_io": 17519, 00:24:05.520 "transports": [ 00:24:05.520 { 00:24:05.520 "trtype": "TCP" 00:24:05.520 } 00:24:05.520 ] 00:24:05.520 } 00:24:05.520 ] 00:24:05.520 }' 00:24:05.520 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:05.520 17:39:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:24:05.520 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:24:05.520 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:24:05.520 17:39:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1681021 00:24:13.759 Initializing NVMe Controllers 00:24:13.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:13.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:13.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:13.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:13.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:24:13.759 Initialization complete. Launching workers. 00:24:13.759 ======================================================== 00:24:13.759 Latency(us) 00:24:13.759 Device Information : IOPS MiB/s Average min max 00:24:13.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12965.00 50.64 4937.38 1299.39 12492.09 00:24:13.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13683.10 53.45 4676.86 1268.37 13533.83 00:24:13.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13558.70 52.96 4719.69 1505.48 13674.71 00:24:13.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13086.90 51.12 4889.62 1246.73 13529.28 00:24:13.759 ======================================================== 00:24:13.759 Total : 53293.69 208.18 4803.38 1246.73 13674.71 00:24:13.759 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.759 rmmod nvme_tcp 00:24:13.759 rmmod nvme_fabrics 00:24:13.759 rmmod nvme_keyring 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1680981 ']' 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1680981 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1680981 ']' 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1680981 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1680981 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1680981' 00:24:13.759 killing process with pid 1680981 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1680981 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1680981 00:24:13.759 17:40:05 
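The pass criterion for this first run is the nvmf_get_stats check above: with the target on -m 0xF and spdk_nvme_perf on -c 0xF0, each of the four poll groups must own exactly one active I/O qpair, and the completed_nvme_io counts (19289/20190/20315/17519) show the load split roughly evenly across them before perf reports ~53k IOPS total. The check, standalone:

    # Steering check from perf_adq.sh: one live I/O qpair per poll group,
    # i.e. one per target core.
    count=$(rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    [[ $count -ne 4 ]] && echo "ADQ steering broken: only $count of 4 poll groups active"

After the check, nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the target by pid, strips the SPDK iptables rule, and reloads the ice driver so the second, busy-poll pass starts from a clean NIC state.
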
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.759 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.669 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:15.669 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:24:15.669 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:15.669 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:17.052 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:18.973 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:24.261 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.261 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:24.262 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:24.262 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:24.262 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.262 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.262 17:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:24.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:24:24.262 00:24:24.262 --- 10.0.0.2 ping statistics --- 00:24:24.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.262 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:24.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:24:24.262 00:24:24.262 --- 10.0.0.1 ping statistics --- 00:24:24.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.262 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:24.262 net.core.busy_poll = 1 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:24.262 net.core.busy_read = 1 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:24.262 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1681736 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1681736 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1681736 ']' 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.523 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:24.783 [2024-12-06 17:40:16.617671] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:24:24.783 [2024-12-06 17:40:16.617742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.783 [2024-12-06 17:40:16.715761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.783 [2024-12-06 17:40:16.768403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:24.783 [2024-12-06 17:40:16.768458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.783 [2024-12-06 17:40:16.768467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.783 [2024-12-06 17:40:16.768474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.783 [2024-12-06 17:40:16.768481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.783 [2024-12-06 17:40:16.770464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.783 [2024-12-06 17:40:16.770625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.783 [2024-12-06 17:40:16.770786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.783 [2024-12-06 17:40:16.770925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.724 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.724 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:25.724 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.724 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:25.724 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.724 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.725 17:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.725 [2024-12-06 17:40:17.644527] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.725 Malloc1 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.725 [2024-12-06 17:40:17.717444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1681773 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:24:25.725 17:40:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:28.265 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:24:28.265 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.265 17:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:28.265 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.265 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:24:28.265 "tick_rate": 2400000000, 00:24:28.265 "poll_groups": [ 00:24:28.265 { 00:24:28.265 "name": "nvmf_tgt_poll_group_000", 00:24:28.265 "admin_qpairs": 1, 00:24:28.265 "io_qpairs": 4, 00:24:28.265 "current_admin_qpairs": 1, 00:24:28.265 "current_io_qpairs": 4, 00:24:28.265 "pending_bdev_io": 0, 00:24:28.265 "completed_nvme_io": 40251, 00:24:28.265 "transports": [ 00:24:28.265 { 00:24:28.265 "trtype": "TCP" 00:24:28.265 } 00:24:28.265 ] 00:24:28.265 }, 00:24:28.265 { 00:24:28.265 "name": "nvmf_tgt_poll_group_001", 00:24:28.265 "admin_qpairs": 0, 00:24:28.265 "io_qpairs": 0, 00:24:28.265 "current_admin_qpairs": 0, 00:24:28.265 "current_io_qpairs": 0, 00:24:28.265 "pending_bdev_io": 0, 00:24:28.265 "completed_nvme_io": 0, 00:24:28.265 "transports": [ 00:24:28.265 { 00:24:28.265 "trtype": "TCP" 00:24:28.265 } 00:24:28.265 ] 00:24:28.265 }, 00:24:28.265 { 00:24:28.265 "name": "nvmf_tgt_poll_group_002", 00:24:28.265 "admin_qpairs": 0, 00:24:28.265 "io_qpairs": 0, 00:24:28.265 "current_admin_qpairs": 0, 00:24:28.265 "current_io_qpairs": 0, 00:24:28.265 "pending_bdev_io": 0, 00:24:28.265 "completed_nvme_io": 0, 00:24:28.265 "transports": [ 00:24:28.265 { 00:24:28.265 "trtype": "TCP" 00:24:28.265 } 00:24:28.265 ] 00:24:28.265 }, 00:24:28.265 { 00:24:28.265 "name": "nvmf_tgt_poll_group_003", 00:24:28.265 "admin_qpairs": 0, 00:24:28.265 "io_qpairs": 0, 00:24:28.265 "current_admin_qpairs": 0, 00:24:28.265 "current_io_qpairs": 0, 00:24:28.265 "pending_bdev_io": 0, 00:24:28.265 "completed_nvme_io": 0, 00:24:28.265 "transports": [ 00:24:28.265 { 00:24:28.265 "trtype": "TCP" 00:24:28.265 } 00:24:28.265 ] 00:24:28.265 } 00:24:28.265 ] 00:24:28.265 }' 00:24:28.265 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:28.265 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:24:28.265 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:24:28.265 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:24:28.265 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1681773 00:24:36.401 Initializing NVMe Controllers 00:24:36.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:36.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:36.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:36.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:36.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:36.401 Initialization complete. Launching workers. 
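[Note on the stats gate above] The nvmf_get_stats check is the actual ADQ verification: with the flower filter steering port 4420 onto dedicated hardware queues, all four I/O qpairs opened by spdk_nvme_perf land on nvmf_tgt_poll_group_000 and the other three poll groups stay idle, so the jq/wc pipeline counts 3 idle groups and the [[ 3 -lt 2 ]] failure branch is skipped. A minimal standalone sketch of the same check, assuming a running target on the default RPC socket and SPDK's scripts/rpc.py:

    # Count poll groups currently serving no I/O qpairs; ADQ steering should
    # leave every group but one idle.
    stats=$(./scripts/rpc.py nvmf_get_stats)
    count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<< "$stats" | wc -l)
    # Fail only if fewer than 2 of the 4 groups are idle, i.e. the traffic
    # spread across poll groups instead of being pinned.
    if [[ $count -lt 2 ]]; then echo 'ADQ steering check failed'; exit 1; fi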
00:24:36.401 ======================================================== 00:24:36.401 Latency(us) 00:24:36.401 Device Information : IOPS MiB/s Average min max 00:24:36.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6694.30 26.15 9560.62 1282.09 60311.85 00:24:36.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6947.10 27.14 9213.55 1158.95 60667.02 00:24:36.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6139.10 23.98 10456.15 1152.45 56063.56 00:24:36.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5649.80 22.07 11329.79 1149.33 62670.34 00:24:36.402 ======================================================== 00:24:36.402 Total : 25430.30 99.34 10075.05 1149.33 62670.34 00:24:36.402 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.402 rmmod nvme_tcp 00:24:36.402 rmmod nvme_fabrics 00:24:36.402 rmmod nvme_keyring 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1681736 ']' 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1681736 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1681736 ']' 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1681736 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.402 17:40:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1681736 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1681736' 00:24:36.402 killing process with pid 1681736 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1681736 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1681736 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:36.402 
17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.402 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.701 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:39.702 00:24:39.702 real 0m53.641s 00:24:39.702 user 2m50.444s 00:24:39.702 sys 0m11.138s 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 ************************************ 00:24:39.702 END TEST nvmf_perf_adq 00:24:39.702 ************************************ 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 ************************************ 00:24:39.702 START TEST nvmf_shutdown 00:24:39.702 ************************************ 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:39.702 * Looking for test storage... 
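[Note on the firewall teardown above] The iptr step is a tag-and-sweep pattern: every rule the test inserts (via the ipts wrapper) carries an 'SPDK_NVMF' comment, so cleanup can reload the ruleset minus exactly those rules without touching anything else on the host. The pattern in isolation, assuming root privileges:

    # Insert a rule tagged with a recognizable comment at setup time...
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # ...and sweep every tagged rule in one pass at teardown.
    iptables-save | grep -v SPDK_NVMF | iptables-restore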
00:24:39.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:39.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.702 --rc genhtml_branch_coverage=1 00:24:39.702 --rc genhtml_function_coverage=1 00:24:39.702 --rc genhtml_legend=1 00:24:39.702 --rc geninfo_all_blocks=1 00:24:39.702 --rc geninfo_unexecuted_blocks=1 00:24:39.702 00:24:39.702 ' 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:39.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.702 --rc genhtml_branch_coverage=1 00:24:39.702 --rc genhtml_function_coverage=1 00:24:39.702 --rc genhtml_legend=1 00:24:39.702 --rc geninfo_all_blocks=1 00:24:39.702 --rc geninfo_unexecuted_blocks=1 00:24:39.702 00:24:39.702 ' 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:39.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.702 --rc genhtml_branch_coverage=1 00:24:39.702 --rc genhtml_function_coverage=1 00:24:39.702 --rc genhtml_legend=1 00:24:39.702 --rc geninfo_all_blocks=1 00:24:39.702 --rc geninfo_unexecuted_blocks=1 00:24:39.702 00:24:39.702 ' 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:39.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.702 --rc genhtml_branch_coverage=1 00:24:39.702 --rc genhtml_function_coverage=1 00:24:39.702 --rc genhtml_legend=1 00:24:39.702 --rc geninfo_all_blocks=1 00:24:39.702 --rc geninfo_unexecuted_blocks=1 00:24:39.702 00:24:39.702 ' 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
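[Note on the version check above] The scripts/common.sh xtrace is the 'lt 1.15 2' test deciding that the installed lcov predates 2.0, which is why the --rc lcov_branch_coverage/lcov_function_coverage options are exported next. A rough sketch of the comparison it performs, with ver_lt as a hypothetical stand-in for the real lt/cmp_versions pair:

    # Return 0 (true) when dotted version $1 sorts before $2.
    ver_lt() {
        local IFS=.-:    # split fields on the same separators as cmp_versions
        read -ra a <<< "$1"; read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1         # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo 'old lcov: keep branch/function coverage flags'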
00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.702 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:39.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:39.703 17:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:39.703 ************************************ 00:24:39.703 START TEST nvmf_shutdown_tc1 00:24:39.703 ************************************ 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:39.703 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.840 17:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.840 17:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:47.840 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:47.840 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:47.840 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:47.840 17:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:47.840 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:47.840 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.841 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:24:47.841 00:24:47.841 --- 10.0.0.2 ping statistics --- 00:24:47.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.841 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:24:47.841 00:24:47.841 --- 10.0.0.1 ping statistics --- 00:24:47.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.841 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1684479 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1684479 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1684479 ']' 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
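[Note on waitforlisten above] The helper blocks the test until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock (max_retries=100, per the xtrace). A condensed sketch of that polling loop, assuming scripts/rpc.py and the rpc_get_methods RPC; the real helper in autotest_common.sh carries extra bookkeeping beyond this:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            # rpc_get_methods succeeds only once the app listens on the socket
            ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }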
00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.841 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:47.841 [2024-12-06 17:40:39.135260] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:24:47.841 [2024-12-06 17:40:39.135326] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.841 [2024-12-06 17:40:39.234479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.841 [2024-12-06 17:40:39.286550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.841 [2024-12-06 17:40:39.286602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.841 [2024-12-06 17:40:39.286611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.841 [2024-12-06 17:40:39.286618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.841 [2024-12-06 17:40:39.286624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.841 [2024-12-06 17:40:39.288591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.841 [2024-12-06 17:40:39.288753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.841 [2024-12-06 17:40:39.288912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.841 [2024-12-06 17:40:39.288912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:48.102 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.102 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:48.102 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.102 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.102 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:48.102 [2024-12-06 17:40:40.012580] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:48.102 17:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:48.102 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:48.103 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:48.103 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:48.103 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:48.103 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:48.103 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.103 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:48.103 Malloc1 
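[Note on the rpcs.txt loop above] The ten for/cat pairs are shutdown.sh appending one block of create commands per subsystem (1..10) to rpcs.txt, and the rpc_cmd call at shutdown.sh@36 replays the whole file in a single RPC session; the Malloc1..Malloc10 lines that follow are its output. A sketch of what each appended block plausibly contains, reconstructed from the 64/512 malloc geometry and the 10.0.0.2:4420 listener this run creates (exact serial numbers and flags are illustrative, not the script's verbatim contents):

    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done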
00:24:48.103 [2024-12-06 17:40:40.137840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.103 Malloc2 00:24:48.363 Malloc3 00:24:48.363 Malloc4 00:24:48.363 Malloc5 00:24:48.363 Malloc6 00:24:48.363 Malloc7 00:24:48.624 Malloc8 00:24:48.624 Malloc9 00:24:48.624 Malloc10 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1684548 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1684548 /var/tmp/bdevperf.sock 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1684548 ']' 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.624 { 00:24:48.624 "params": { 00:24:48.624 "name": "Nvme$subsystem", 00:24:48.624 "trtype": "$TEST_TRANSPORT", 00:24:48.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.624 "adrfam": "ipv4", 00:24:48.624 "trsvcid": "$NVMF_PORT", 00:24:48.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.624 "hdgst": ${hdgst:-false}, 00:24:48.624 "ddgst": ${ddgst:-false} 00:24:48.624 }, 00:24:48.624 "method": "bdev_nvme_attach_controller" 00:24:48.624 } 00:24:48.624 EOF 00:24:48.624 )") 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.624 { 00:24:48.624 "params": { 00:24:48.624 "name": "Nvme$subsystem", 00:24:48.624 "trtype": "$TEST_TRANSPORT", 00:24:48.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.624 "adrfam": "ipv4", 00:24:48.624 "trsvcid": "$NVMF_PORT", 00:24:48.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.624 "hdgst": ${hdgst:-false}, 00:24:48.624 "ddgst": ${ddgst:-false} 00:24:48.624 }, 00:24:48.624 "method": "bdev_nvme_attach_controller" 00:24:48.624 } 00:24:48.624 EOF 00:24:48.624 )") 00:24:48.624 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.625 { 00:24:48.625 "params": { 00:24:48.625 "name": "Nvme$subsystem", 00:24:48.625 "trtype": "$TEST_TRANSPORT", 00:24:48.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.625 "adrfam": "ipv4", 00:24:48.625 "trsvcid": "$NVMF_PORT", 00:24:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.625 "hdgst": ${hdgst:-false}, 00:24:48.625 "ddgst": ${ddgst:-false} 00:24:48.625 }, 00:24:48.625 "method": "bdev_nvme_attach_controller" 
00:24:48.625 } 00:24:48.625 EOF 00:24:48.625 )") 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.625 { 00:24:48.625 "params": { 00:24:48.625 "name": "Nvme$subsystem", 00:24:48.625 "trtype": "$TEST_TRANSPORT", 00:24:48.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.625 "adrfam": "ipv4", 00:24:48.625 "trsvcid": "$NVMF_PORT", 00:24:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.625 "hdgst": ${hdgst:-false}, 00:24:48.625 "ddgst": ${ddgst:-false} 00:24:48.625 }, 00:24:48.625 "method": "bdev_nvme_attach_controller" 00:24:48.625 } 00:24:48.625 EOF 00:24:48.625 )") 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.625 { 00:24:48.625 "params": { 00:24:48.625 "name": "Nvme$subsystem", 00:24:48.625 "trtype": "$TEST_TRANSPORT", 00:24:48.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.625 "adrfam": "ipv4", 00:24:48.625 "trsvcid": "$NVMF_PORT", 00:24:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.625 "hdgst": ${hdgst:-false}, 00:24:48.625 "ddgst": ${ddgst:-false} 00:24:48.625 }, 00:24:48.625 "method": "bdev_nvme_attach_controller" 00:24:48.625 } 00:24:48.625 EOF 00:24:48.625 )") 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.625 { 00:24:48.625 "params": { 00:24:48.625 "name": "Nvme$subsystem", 00:24:48.625 "trtype": "$TEST_TRANSPORT", 00:24:48.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.625 "adrfam": "ipv4", 00:24:48.625 "trsvcid": "$NVMF_PORT", 00:24:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.625 "hdgst": ${hdgst:-false}, 00:24:48.625 "ddgst": ${ddgst:-false} 00:24:48.625 }, 00:24:48.625 "method": "bdev_nvme_attach_controller" 00:24:48.625 } 00:24:48.625 EOF 00:24:48.625 )") 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.625 { 00:24:48.625 "params": { 00:24:48.625 "name": "Nvme$subsystem", 00:24:48.625 "trtype": "$TEST_TRANSPORT", 00:24:48.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.625 "adrfam": "ipv4", 00:24:48.625 "trsvcid": "$NVMF_PORT", 00:24:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.625 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.625 "hdgst": ${hdgst:-false}, 00:24:48.625 "ddgst": ${ddgst:-false} 00:24:48.625 }, 00:24:48.625 "method": "bdev_nvme_attach_controller" 00:24:48.625 } 00:24:48.625 EOF 00:24:48.625 )") 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.625 [2024-12-06 17:40:40.668857] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:24:48.625 [2024-12-06 17:40:40.668933] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.625 { 00:24:48.625 "params": { 00:24:48.625 "name": "Nvme$subsystem", 00:24:48.625 "trtype": "$TEST_TRANSPORT", 00:24:48.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.625 "adrfam": "ipv4", 00:24:48.625 "trsvcid": "$NVMF_PORT", 00:24:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.625 "hdgst": ${hdgst:-false}, 00:24:48.625 "ddgst": ${ddgst:-false} 00:24:48.625 }, 00:24:48.625 "method": "bdev_nvme_attach_controller" 00:24:48.625 } 00:24:48.625 EOF 00:24:48.625 )") 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.625 { 00:24:48.625 "params": { 00:24:48.625 "name": "Nvme$subsystem", 00:24:48.625 "trtype": "$TEST_TRANSPORT", 00:24:48.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.625 "adrfam": "ipv4", 00:24:48.625 "trsvcid": "$NVMF_PORT", 00:24:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.625 "hdgst": ${hdgst:-false}, 00:24:48.625 "ddgst": ${ddgst:-false} 00:24:48.625 }, 00:24:48.625 "method": "bdev_nvme_attach_controller" 00:24:48.625 } 00:24:48.625 EOF 00:24:48.625 )") 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.625 { 00:24:48.625 "params": { 00:24:48.625 "name": "Nvme$subsystem", 00:24:48.625 "trtype": "$TEST_TRANSPORT", 00:24:48.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.625 "adrfam": "ipv4", 00:24:48.625 "trsvcid": "$NVMF_PORT", 00:24:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.625 "hdgst": ${hdgst:-false}, 00:24:48.625 "ddgst": ${ddgst:-false} 00:24:48.625 }, 00:24:48.625 "method": "bdev_nvme_attach_controller" 00:24:48.625 } 00:24:48.625 EOF 00:24:48.625 )") 00:24:48.625 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# cat 00:24:48.886 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:24:48.886 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:48.886 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:48.886 "params": { 00:24:48.886 "name": "Nvme1", 00:24:48.886 "trtype": "tcp", 00:24:48.886 "traddr": "10.0.0.2", 00:24:48.886 "adrfam": "ipv4", 00:24:48.886 "trsvcid": "4420", 00:24:48.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.886 "hdgst": false, 00:24:48.886 "ddgst": false 00:24:48.886 }, 00:24:48.886 "method": "bdev_nvme_attach_controller" 00:24:48.886 },{ 00:24:48.886 "params": { 00:24:48.886 "name": "Nvme2", 00:24:48.886 "trtype": "tcp", 00:24:48.886 "traddr": "10.0.0.2", 00:24:48.886 "adrfam": "ipv4", 00:24:48.886 "trsvcid": "4420", 00:24:48.886 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:48.886 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:48.886 "hdgst": false, 00:24:48.886 "ddgst": false 00:24:48.886 }, 00:24:48.886 "method": "bdev_nvme_attach_controller" 00:24:48.886 },{ 00:24:48.886 "params": { 00:24:48.886 "name": "Nvme3", 00:24:48.886 "trtype": "tcp", 00:24:48.886 "traddr": "10.0.0.2", 00:24:48.886 "adrfam": "ipv4", 00:24:48.886 "trsvcid": "4420", 00:24:48.886 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:48.886 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:48.886 "hdgst": false, 00:24:48.886 "ddgst": false 00:24:48.886 }, 00:24:48.886 "method": "bdev_nvme_attach_controller" 00:24:48.886 },{ 00:24:48.886 "params": { 00:24:48.886 "name": "Nvme4", 00:24:48.886 "trtype": "tcp", 00:24:48.886 "traddr": "10.0.0.2", 00:24:48.886 "adrfam": "ipv4", 00:24:48.886 "trsvcid": "4420", 00:24:48.886 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:48.886 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:48.886 "hdgst": false, 00:24:48.886 "ddgst": false 00:24:48.886 }, 00:24:48.886 "method": "bdev_nvme_attach_controller" 00:24:48.886 },{ 00:24:48.886 "params": { 00:24:48.886 "name": "Nvme5", 00:24:48.886 "trtype": "tcp", 00:24:48.886 "traddr": "10.0.0.2", 00:24:48.886 "adrfam": "ipv4", 00:24:48.886 "trsvcid": "4420", 00:24:48.886 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:48.886 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:48.886 "hdgst": false, 00:24:48.886 "ddgst": false 00:24:48.886 }, 00:24:48.886 "method": "bdev_nvme_attach_controller" 00:24:48.886 },{ 00:24:48.886 "params": { 00:24:48.886 "name": "Nvme6", 00:24:48.886 "trtype": "tcp", 00:24:48.886 "traddr": "10.0.0.2", 00:24:48.886 "adrfam": "ipv4", 00:24:48.886 "trsvcid": "4420", 00:24:48.886 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:48.886 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:48.886 "hdgst": false, 00:24:48.886 "ddgst": false 00:24:48.886 }, 00:24:48.886 "method": "bdev_nvme_attach_controller" 00:24:48.886 },{ 00:24:48.886 "params": { 00:24:48.886 "name": "Nvme7", 00:24:48.886 "trtype": "tcp", 00:24:48.886 "traddr": "10.0.0.2", 00:24:48.886 "adrfam": "ipv4", 00:24:48.886 "trsvcid": "4420", 00:24:48.886 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:48.886 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:48.886 "hdgst": false, 00:24:48.886 "ddgst": false 00:24:48.886 }, 00:24:48.886 "method": "bdev_nvme_attach_controller" 00:24:48.886 },{ 00:24:48.886 "params": { 00:24:48.886 "name": "Nvme8", 00:24:48.886 "trtype": "tcp", 00:24:48.886 "traddr": "10.0.0.2", 00:24:48.886 "adrfam": "ipv4", 00:24:48.886 
"trsvcid": "4420", 00:24:48.886 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:48.886 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:48.886 "hdgst": false, 00:24:48.886 "ddgst": false 00:24:48.886 }, 00:24:48.886 "method": "bdev_nvme_attach_controller" 00:24:48.886 },{ 00:24:48.886 "params": { 00:24:48.886 "name": "Nvme9", 00:24:48.886 "trtype": "tcp", 00:24:48.886 "traddr": "10.0.0.2", 00:24:48.886 "adrfam": "ipv4", 00:24:48.886 "trsvcid": "4420", 00:24:48.886 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:48.886 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:48.886 "hdgst": false, 00:24:48.886 "ddgst": false 00:24:48.886 }, 00:24:48.886 "method": "bdev_nvme_attach_controller" 00:24:48.886 },{ 00:24:48.886 "params": { 00:24:48.886 "name": "Nvme10", 00:24:48.886 "trtype": "tcp", 00:24:48.886 "traddr": "10.0.0.2", 00:24:48.886 "adrfam": "ipv4", 00:24:48.886 "trsvcid": "4420", 00:24:48.886 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:48.887 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:48.887 "hdgst": false, 00:24:48.887 "ddgst": false 00:24:48.887 }, 00:24:48.887 "method": "bdev_nvme_attach_controller" 00:24:48.887 }' 00:24:48.887 [2024-12-06 17:40:40.763527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.887 [2024-12-06 17:40:40.816929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.271 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.271 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:50.271 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:50.271 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.271 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:50.271 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.271 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1684548 00:24:50.271 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:50.271 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:51.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1684548 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1684479 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 
00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.211 { 00:24:51.211 "params": { 00:24:51.211 "name": "Nvme$subsystem", 00:24:51.211 "trtype": "$TEST_TRANSPORT", 00:24:51.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.211 "adrfam": "ipv4", 00:24:51.211 "trsvcid": "$NVMF_PORT", 00:24:51.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.211 "hdgst": ${hdgst:-false}, 00:24:51.211 "ddgst": ${ddgst:-false} 00:24:51.211 }, 00:24:51.211 "method": "bdev_nvme_attach_controller" 00:24:51.211 } 00:24:51.211 EOF 00:24:51.211 )") 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.211 { 00:24:51.211 "params": { 00:24:51.211 "name": "Nvme$subsystem", 00:24:51.211 "trtype": "$TEST_TRANSPORT", 00:24:51.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.211 "adrfam": "ipv4", 00:24:51.211 "trsvcid": "$NVMF_PORT", 00:24:51.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.211 "hdgst": ${hdgst:-false}, 00:24:51.211 "ddgst": ${ddgst:-false} 00:24:51.211 }, 00:24:51.211 "method": "bdev_nvme_attach_controller" 00:24:51.211 } 00:24:51.211 EOF 00:24:51.211 )") 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.211 { 00:24:51.211 "params": { 00:24:51.211 "name": "Nvme$subsystem", 00:24:51.211 "trtype": "$TEST_TRANSPORT", 00:24:51.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.211 "adrfam": "ipv4", 00:24:51.211 "trsvcid": "$NVMF_PORT", 00:24:51.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.211 "hdgst": ${hdgst:-false}, 00:24:51.211 "ddgst": ${ddgst:-false} 00:24:51.211 }, 00:24:51.211 "method": "bdev_nvme_attach_controller" 00:24:51.211 } 00:24:51.211 EOF 00:24:51.211 )") 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.211 { 00:24:51.211 "params": { 00:24:51.211 "name": "Nvme$subsystem", 00:24:51.211 "trtype": "$TEST_TRANSPORT", 00:24:51.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.211 "adrfam": "ipv4", 00:24:51.211 "trsvcid": "$NVMF_PORT", 00:24:51.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.211 "hdgst": ${hdgst:-false}, 00:24:51.211 "ddgst": ${ddgst:-false} 00:24:51.211 }, 00:24:51.211 "method": 
"bdev_nvme_attach_controller" 00:24:51.211 } 00:24:51.211 EOF 00:24:51.211 )") 00:24:51.211 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.473 { 00:24:51.473 "params": { 00:24:51.473 "name": "Nvme$subsystem", 00:24:51.473 "trtype": "$TEST_TRANSPORT", 00:24:51.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.473 "adrfam": "ipv4", 00:24:51.473 "trsvcid": "$NVMF_PORT", 00:24:51.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.473 "hdgst": ${hdgst:-false}, 00:24:51.473 "ddgst": ${ddgst:-false} 00:24:51.473 }, 00:24:51.473 "method": "bdev_nvme_attach_controller" 00:24:51.473 } 00:24:51.473 EOF 00:24:51.473 )") 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.473 { 00:24:51.473 "params": { 00:24:51.473 "name": "Nvme$subsystem", 00:24:51.473 "trtype": "$TEST_TRANSPORT", 00:24:51.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.473 "adrfam": "ipv4", 00:24:51.473 "trsvcid": "$NVMF_PORT", 00:24:51.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.473 "hdgst": ${hdgst:-false}, 00:24:51.473 "ddgst": ${ddgst:-false} 00:24:51.473 }, 00:24:51.473 "method": "bdev_nvme_attach_controller" 00:24:51.473 } 00:24:51.473 EOF 00:24:51.473 )") 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.473 { 00:24:51.473 "params": { 00:24:51.473 "name": "Nvme$subsystem", 00:24:51.473 "trtype": "$TEST_TRANSPORT", 00:24:51.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.473 "adrfam": "ipv4", 00:24:51.473 "trsvcid": "$NVMF_PORT", 00:24:51.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.473 "hdgst": ${hdgst:-false}, 00:24:51.473 "ddgst": ${ddgst:-false} 00:24:51.473 }, 00:24:51.473 "method": "bdev_nvme_attach_controller" 00:24:51.473 } 00:24:51.473 EOF 00:24:51.473 )") 00:24:51.473 [2024-12-06 17:40:43.294121] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:24:51.473 [2024-12-06 17:40:43.294177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684612 ] 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.473 { 00:24:51.473 "params": { 00:24:51.473 "name": "Nvme$subsystem", 00:24:51.473 "trtype": "$TEST_TRANSPORT", 00:24:51.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.473 "adrfam": "ipv4", 00:24:51.473 "trsvcid": "$NVMF_PORT", 00:24:51.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.473 "hdgst": ${hdgst:-false}, 00:24:51.473 "ddgst": ${ddgst:-false} 00:24:51.473 }, 00:24:51.473 "method": "bdev_nvme_attach_controller" 00:24:51.473 } 00:24:51.473 EOF 00:24:51.473 )") 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.473 { 00:24:51.473 "params": { 00:24:51.473 "name": "Nvme$subsystem", 00:24:51.473 "trtype": "$TEST_TRANSPORT", 00:24:51.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.473 "adrfam": "ipv4", 00:24:51.473 "trsvcid": "$NVMF_PORT", 00:24:51.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.473 "hdgst": ${hdgst:-false}, 00:24:51.473 "ddgst": ${ddgst:-false} 00:24:51.473 }, 00:24:51.473 "method": "bdev_nvme_attach_controller" 00:24:51.473 } 00:24:51.473 EOF 00:24:51.473 )") 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:51.473 { 00:24:51.473 "params": { 00:24:51.473 "name": "Nvme$subsystem", 00:24:51.473 "trtype": "$TEST_TRANSPORT", 00:24:51.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:51.473 "adrfam": "ipv4", 00:24:51.473 "trsvcid": "$NVMF_PORT", 00:24:51.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:51.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:51.473 "hdgst": ${hdgst:-false}, 00:24:51.473 "ddgst": ${ddgst:-false} 00:24:51.473 }, 00:24:51.473 "method": "bdev_nvme_attach_controller" 00:24:51.473 } 00:24:51.473 EOF 00:24:51.473 )") 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:51.473 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:51.473 "params": { 00:24:51.473 "name": "Nvme1", 00:24:51.473 "trtype": "tcp", 00:24:51.473 "traddr": "10.0.0.2", 00:24:51.473 "adrfam": "ipv4", 00:24:51.473 "trsvcid": "4420", 00:24:51.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.473 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:51.473 "hdgst": false, 00:24:51.473 "ddgst": false 00:24:51.473 }, 00:24:51.473 "method": "bdev_nvme_attach_controller" 00:24:51.473 },{ 00:24:51.473 "params": { 00:24:51.473 "name": "Nvme2", 00:24:51.473 "trtype": "tcp", 00:24:51.473 "traddr": "10.0.0.2", 00:24:51.473 "adrfam": "ipv4", 00:24:51.473 "trsvcid": "4420", 00:24:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:51.474 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:51.474 "hdgst": false, 00:24:51.474 "ddgst": false 00:24:51.474 }, 00:24:51.474 "method": "bdev_nvme_attach_controller" 00:24:51.474 },{ 00:24:51.474 "params": { 00:24:51.474 "name": "Nvme3", 00:24:51.474 "trtype": "tcp", 00:24:51.474 "traddr": "10.0.0.2", 00:24:51.474 "adrfam": "ipv4", 00:24:51.474 "trsvcid": "4420", 00:24:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:51.474 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:51.474 "hdgst": false, 00:24:51.474 "ddgst": false 00:24:51.474 }, 00:24:51.474 "method": "bdev_nvme_attach_controller" 00:24:51.474 },{ 00:24:51.474 "params": { 00:24:51.474 "name": "Nvme4", 00:24:51.474 "trtype": "tcp", 00:24:51.474 "traddr": "10.0.0.2", 00:24:51.474 "adrfam": "ipv4", 00:24:51.474 "trsvcid": "4420", 00:24:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:51.474 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:51.474 "hdgst": false, 00:24:51.474 "ddgst": false 00:24:51.474 }, 00:24:51.474 "method": "bdev_nvme_attach_controller" 00:24:51.474 },{ 00:24:51.474 "params": { 00:24:51.474 "name": "Nvme5", 00:24:51.474 "trtype": "tcp", 00:24:51.474 "traddr": "10.0.0.2", 00:24:51.474 "adrfam": "ipv4", 00:24:51.474 "trsvcid": "4420", 00:24:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:51.474 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:51.474 "hdgst": false, 00:24:51.474 "ddgst": false 00:24:51.474 }, 00:24:51.474 "method": "bdev_nvme_attach_controller" 00:24:51.474 },{ 00:24:51.474 "params": { 00:24:51.474 "name": "Nvme6", 00:24:51.474 "trtype": "tcp", 00:24:51.474 "traddr": "10.0.0.2", 00:24:51.474 "adrfam": "ipv4", 00:24:51.474 "trsvcid": "4420", 00:24:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:51.474 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:51.474 "hdgst": false, 00:24:51.474 "ddgst": false 00:24:51.474 }, 00:24:51.474 "method": "bdev_nvme_attach_controller" 00:24:51.474 },{ 00:24:51.474 "params": { 00:24:51.474 "name": "Nvme7", 00:24:51.474 "trtype": "tcp", 00:24:51.474 "traddr": "10.0.0.2", 00:24:51.474 "adrfam": "ipv4", 00:24:51.474 "trsvcid": "4420", 00:24:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:51.474 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:51.474 "hdgst": false, 00:24:51.474 "ddgst": false 00:24:51.474 }, 00:24:51.474 "method": "bdev_nvme_attach_controller" 00:24:51.474 },{ 00:24:51.474 "params": { 00:24:51.474 "name": "Nvme8", 00:24:51.474 "trtype": "tcp", 00:24:51.474 "traddr": "10.0.0.2", 00:24:51.474 "adrfam": "ipv4", 00:24:51.474 "trsvcid": "4420", 00:24:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:51.474 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:24:51.474 "hdgst": false, 00:24:51.474 "ddgst": false 00:24:51.474 }, 00:24:51.474 "method": "bdev_nvme_attach_controller" 00:24:51.474 },{ 00:24:51.474 "params": { 00:24:51.474 "name": "Nvme9", 00:24:51.474 "trtype": "tcp", 00:24:51.474 "traddr": "10.0.0.2", 00:24:51.474 "adrfam": "ipv4", 00:24:51.474 "trsvcid": "4420", 00:24:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:51.474 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:51.474 "hdgst": false, 00:24:51.474 "ddgst": false 00:24:51.474 }, 00:24:51.474 "method": "bdev_nvme_attach_controller" 00:24:51.474 },{ 00:24:51.474 "params": { 00:24:51.474 "name": "Nvme10", 00:24:51.474 "trtype": "tcp", 00:24:51.474 "traddr": "10.0.0.2", 00:24:51.474 "adrfam": "ipv4", 00:24:51.474 "trsvcid": "4420", 00:24:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:51.474 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:51.474 "hdgst": false, 00:24:51.474 "ddgst": false 00:24:51.474 }, 00:24:51.474 "method": "bdev_nvme_attach_controller" 00:24:51.474 }' 00:24:51.474 [2024-12-06 17:40:43.385789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.474 [2024-12-06 17:40:43.421714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.855 Running I/O for 1 seconds... 00:24:54.053 1866.00 IOPS, 116.62 MiB/s 00:24:54.053 Latency(us) 00:24:54.053 [2024-12-06T16:40:46.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.053 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.053 Verification LBA range: start 0x0 length 0x400 00:24:54.053 Nvme1n1 : 1.12 228.01 14.25 0.00 0.00 277852.16 19005.44 251658.24 00:24:54.053 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.053 Verification LBA range: start 0x0 length 0x400 00:24:54.053 Nvme2n1 : 1.14 225.16 14.07 0.00 0.00 276665.39 20643.84 249910.61 00:24:54.053 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.053 Verification LBA range: start 0x0 length 0x400 00:24:54.053 Nvme3n1 : 1.16 275.82 17.24 0.00 0.00 219978.33 9175.04 228939.09 00:24:54.053 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.053 Verification LBA range: start 0x0 length 0x400 00:24:54.053 Nvme4n1 : 1.06 241.67 15.10 0.00 0.00 247310.76 1201.49 251658.24 00:24:54.053 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.053 Verification LBA range: start 0x0 length 0x400 00:24:54.053 Nvme5n1 : 1.09 233.81 14.61 0.00 0.00 251860.48 38229.33 225443.84 00:24:54.053 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.053 Verification LBA range: start 0x0 length 0x400 00:24:54.053 Nvme6n1 : 1.13 226.93 14.18 0.00 0.00 255208.32 16602.45 253405.87 00:24:54.053 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.053 Verification LBA range: start 0x0 length 0x400 00:24:54.053 Nvme7n1 : 1.13 225.94 14.12 0.00 0.00 252033.49 19551.57 248162.99 00:24:54.053 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.053 Verification LBA range: start 0x0 length 0x400 00:24:54.054 Nvme8n1 : 1.19 269.04 16.82 0.00 0.00 205555.46 11578.03 258648.75 00:24:54.054 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:54.054 Verification LBA range: start 0x0 length 0x400 00:24:54.054 Nvme9n1 : 1.21 265.52 16.60 0.00 0.00 208355.58 11687.25 253405.87 00:24:54.054 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:24:54.054 Verification LBA range: start 0x0 length 0x400 00:24:54.054 Nvme10n1 : 1.21 262.34 16.40 0.00 0.00 207171.47 8082.77 283115.52 00:24:54.054 [2024-12-06T16:40:46.120Z] =================================================================================================================== 00:24:54.054 [2024-12-06T16:40:46.120Z] Total : 2454.24 153.39 0.00 0.00 237502.90 1201.49 283115.52 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.312 rmmod nvme_tcp 00:24:54.312 rmmod nvme_fabrics 00:24:54.312 rmmod nvme_keyring 00:24:54.312 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1684479 ']' 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1684479 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1684479 ']' 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1684479 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1684479 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1684479' 00:24:54.313 killing process with pid 1684479 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1684479 00:24:54.313 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1684479 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.572 17:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.504 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:56.767 00:24:56.767 real 0m16.986s 00:24:56.767 user 0m35.065s 00:24:56.767 sys 0m6.858s 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:56.767 ************************************ 00:24:56.767 END TEST nvmf_shutdown_tc1 00:24:56.767 ************************************ 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:56.767 ************************************ 00:24:56.767 START TEST nvmf_shutdown_tc2 00:24:56.767 ************************************ 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:24:56.767 17:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:56.767 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.767 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:56.768 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:56.768 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.768 17:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:56.768 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.768 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.029 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.029 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:57.029 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:57.029 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:57.029 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:57.029 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:57.029 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:57.029 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:57.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:57.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms
00:24:57.029
00:24:57.029 --- 10.0.0.2 ping statistics ---
00:24:57.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:57.029 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms
00:24:57.029 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:57.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:57.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms
00:24:57.029
00:24:57.029 --- 10.0.0.1 ping statistics ---
00:24:57.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:57.029 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:24:57.030 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:57.030 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:24:57.030 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:57.030 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:57.030 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:57.030 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:57.030 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:57.030 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:57.030 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:57.030 17:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1684787 00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1684787 00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1684787 ']' 00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.030 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:57.292 [2024-12-06 17:40:49.104795] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:24:57.292 [2024-12-06 17:40:49.104854] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.292 [2024-12-06 17:40:49.195286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.292 [2024-12-06 17:40:49.225529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.292 [2024-12-06 17:40:49.225555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.292 [2024-12-06 17:40:49.225560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.292 [2024-12-06 17:40:49.225565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.292 [2024-12-06 17:40:49.225572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
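For anyone replaying this stage outside the CI harness: stripped of the xtrace plumbing, the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) plus the target launch reduces to the minimal sketch below. Assumptions are labeled: cvl_0_0/cvl_0_1 are the E810 port names on this rig, so substitute your own interfaces; $SPDK_ROOT is a stand-in for the workspace path in the log; waitforlisten is the harness helper that polls the RPC socket until the app answers.

# Isolate the target-side port in its own network namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace (-m 0x1E pins it to cores 1-4, matching
# the reactor notices below) and wait for /var/tmp/spdk.sock to accept RPCs
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
waitforlisten "$nvmfpid"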
00:24:57.292 [2024-12-06 17:40:49.226716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.292 [2024-12-06 17:40:49.226844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.292 [2024-12-06 17:40:49.226944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:57.292 [2024-12-06 17:40:49.226946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.863 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.863 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:57.863 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.863 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:57.863 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.123 [2024-12-06 17:40:49.948048] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.123 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.123 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.123 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.123 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.123 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.123 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:58.123 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.123 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.124 Malloc1 00:24:58.124 [2024-12-06 17:40:50.057528] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.124 Malloc2 00:24:58.124 Malloc3 00:24:58.124 Malloc4 00:24:58.124 Malloc5 00:24:58.385 Malloc6 00:24:58.385 Malloc7 00:24:58.385 Malloc8 00:24:58.385 Malloc9 00:24:58.385 Malloc10 00:24:58.385 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.385 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:58.385 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.385 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1684857 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1684857 /var/tmp/bdevperf.sock 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1684857 ']' 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.647 17:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.647 { 00:24:58.647 "params": { 00:24:58.647 "name": "Nvme$subsystem", 00:24:58.647 "trtype": "$TEST_TRANSPORT", 00:24:58.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.647 "adrfam": "ipv4", 00:24:58.647 "trsvcid": "$NVMF_PORT", 00:24:58.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.647 "hdgst": ${hdgst:-false}, 00:24:58.647 "ddgst": ${ddgst:-false} 00:24:58.647 }, 00:24:58.647 "method": "bdev_nvme_attach_controller" 00:24:58.647 } 00:24:58.647 EOF 00:24:58.647 )") 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.647 { 00:24:58.647 "params": { 00:24:58.647 "name": "Nvme$subsystem", 00:24:58.647 "trtype": "$TEST_TRANSPORT", 00:24:58.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.647 "adrfam": "ipv4", 00:24:58.647 "trsvcid": "$NVMF_PORT", 00:24:58.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.647 "hdgst": ${hdgst:-false}, 00:24:58.647 "ddgst": ${ddgst:-false} 00:24:58.647 }, 00:24:58.647 "method": "bdev_nvme_attach_controller" 00:24:58.647 } 00:24:58.647 EOF 00:24:58.647 )") 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.647 { 00:24:58.647 "params": { 00:24:58.647 
"name": "Nvme$subsystem", 00:24:58.647 "trtype": "$TEST_TRANSPORT", 00:24:58.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.647 "adrfam": "ipv4", 00:24:58.647 "trsvcid": "$NVMF_PORT", 00:24:58.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.647 "hdgst": ${hdgst:-false}, 00:24:58.647 "ddgst": ${ddgst:-false} 00:24:58.647 }, 00:24:58.647 "method": "bdev_nvme_attach_controller" 00:24:58.647 } 00:24:58.647 EOF 00:24:58.647 )") 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.647 { 00:24:58.647 "params": { 00:24:58.647 "name": "Nvme$subsystem", 00:24:58.647 "trtype": "$TEST_TRANSPORT", 00:24:58.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.647 "adrfam": "ipv4", 00:24:58.647 "trsvcid": "$NVMF_PORT", 00:24:58.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.647 "hdgst": ${hdgst:-false}, 00:24:58.647 "ddgst": ${ddgst:-false} 00:24:58.647 }, 00:24:58.647 "method": "bdev_nvme_attach_controller" 00:24:58.647 } 00:24:58.647 EOF 00:24:58.647 )") 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.647 { 00:24:58.647 "params": { 00:24:58.647 "name": "Nvme$subsystem", 00:24:58.647 "trtype": "$TEST_TRANSPORT", 00:24:58.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.647 "adrfam": "ipv4", 00:24:58.647 "trsvcid": "$NVMF_PORT", 00:24:58.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.647 "hdgst": ${hdgst:-false}, 00:24:58.647 "ddgst": ${ddgst:-false} 00:24:58.647 }, 00:24:58.647 "method": "bdev_nvme_attach_controller" 00:24:58.647 } 00:24:58.647 EOF 00:24:58.647 )") 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.647 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.647 { 00:24:58.647 "params": { 00:24:58.647 "name": "Nvme$subsystem", 00:24:58.647 "trtype": "$TEST_TRANSPORT", 00:24:58.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.647 "adrfam": "ipv4", 00:24:58.647 "trsvcid": "$NVMF_PORT", 00:24:58.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.647 "hdgst": ${hdgst:-false}, 00:24:58.647 "ddgst": ${ddgst:-false} 00:24:58.648 }, 00:24:58.648 "method": "bdev_nvme_attach_controller" 00:24:58.648 } 00:24:58.648 EOF 00:24:58.648 )") 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:58.648 [2024-12-06 17:40:50.500146] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:24:58.648 [2024-12-06 17:40:50.500201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684857 ] 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.648 { 00:24:58.648 "params": { 00:24:58.648 "name": "Nvme$subsystem", 00:24:58.648 "trtype": "$TEST_TRANSPORT", 00:24:58.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.648 "adrfam": "ipv4", 00:24:58.648 "trsvcid": "$NVMF_PORT", 00:24:58.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.648 "hdgst": ${hdgst:-false}, 00:24:58.648 "ddgst": ${ddgst:-false} 00:24:58.648 }, 00:24:58.648 "method": "bdev_nvme_attach_controller" 00:24:58.648 } 00:24:58.648 EOF 00:24:58.648 )") 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.648 { 00:24:58.648 "params": { 00:24:58.648 "name": "Nvme$subsystem", 00:24:58.648 "trtype": "$TEST_TRANSPORT", 00:24:58.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.648 "adrfam": "ipv4", 00:24:58.648 "trsvcid": "$NVMF_PORT", 00:24:58.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.648 "hdgst": ${hdgst:-false}, 00:24:58.648 "ddgst": ${ddgst:-false} 00:24:58.648 }, 00:24:58.648 "method": "bdev_nvme_attach_controller" 00:24:58.648 } 00:24:58.648 EOF 00:24:58.648 )") 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.648 { 00:24:58.648 "params": { 00:24:58.648 "name": "Nvme$subsystem", 00:24:58.648 "trtype": "$TEST_TRANSPORT", 00:24:58.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.648 "adrfam": "ipv4", 00:24:58.648 "trsvcid": "$NVMF_PORT", 00:24:58.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.648 "hdgst": ${hdgst:-false}, 00:24:58.648 "ddgst": ${ddgst:-false} 00:24:58.648 }, 00:24:58.648 "method": "bdev_nvme_attach_controller" 00:24:58.648 } 00:24:58.648 EOF 00:24:58.648 )") 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:58.648 { 00:24:58.648 "params": { 00:24:58.648 "name": "Nvme$subsystem", 00:24:58.648 "trtype": "$TEST_TRANSPORT", 00:24:58.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.648 
"adrfam": "ipv4", 00:24:58.648 "trsvcid": "$NVMF_PORT", 00:24:58.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.648 "hdgst": ${hdgst:-false}, 00:24:58.648 "ddgst": ${ddgst:-false} 00:24:58.648 }, 00:24:58.648 "method": "bdev_nvme_attach_controller" 00:24:58.648 } 00:24:58.648 EOF 00:24:58.648 )") 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:24:58.648 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:58.648 "params": { 00:24:58.648 "name": "Nvme1", 00:24:58.648 "trtype": "tcp", 00:24:58.648 "traddr": "10.0.0.2", 00:24:58.648 "adrfam": "ipv4", 00:24:58.648 "trsvcid": "4420", 00:24:58.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:58.648 "hdgst": false, 00:24:58.648 "ddgst": false 00:24:58.648 }, 00:24:58.648 "method": "bdev_nvme_attach_controller" 00:24:58.648 },{ 00:24:58.648 "params": { 00:24:58.648 "name": "Nvme2", 00:24:58.648 "trtype": "tcp", 00:24:58.648 "traddr": "10.0.0.2", 00:24:58.648 "adrfam": "ipv4", 00:24:58.648 "trsvcid": "4420", 00:24:58.648 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:58.648 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:58.648 "hdgst": false, 00:24:58.648 "ddgst": false 00:24:58.648 }, 00:24:58.648 "method": "bdev_nvme_attach_controller" 00:24:58.648 },{ 00:24:58.648 "params": { 00:24:58.648 "name": "Nvme3", 00:24:58.648 "trtype": "tcp", 00:24:58.648 "traddr": "10.0.0.2", 00:24:58.648 "adrfam": "ipv4", 00:24:58.648 "trsvcid": "4420", 00:24:58.648 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:58.648 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:58.648 "hdgst": false, 00:24:58.648 "ddgst": false 00:24:58.648 }, 00:24:58.648 "method": "bdev_nvme_attach_controller" 00:24:58.648 },{ 00:24:58.648 "params": { 00:24:58.648 "name": "Nvme4", 00:24:58.648 "trtype": "tcp", 00:24:58.648 "traddr": "10.0.0.2", 00:24:58.648 "adrfam": "ipv4", 00:24:58.648 "trsvcid": "4420", 00:24:58.648 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:58.648 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:58.648 "hdgst": false, 00:24:58.648 "ddgst": false 00:24:58.648 }, 00:24:58.648 "method": "bdev_nvme_attach_controller" 00:24:58.648 },{ 00:24:58.648 "params": { 00:24:58.648 "name": "Nvme5", 00:24:58.648 "trtype": "tcp", 00:24:58.648 "traddr": "10.0.0.2", 00:24:58.648 "adrfam": "ipv4", 00:24:58.648 "trsvcid": "4420", 00:24:58.648 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:58.648 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:58.648 "hdgst": false, 00:24:58.648 "ddgst": false 00:24:58.648 }, 00:24:58.648 "method": "bdev_nvme_attach_controller" 00:24:58.648 },{ 00:24:58.648 "params": { 00:24:58.649 "name": "Nvme6", 00:24:58.649 "trtype": "tcp", 00:24:58.649 "traddr": "10.0.0.2", 00:24:58.649 "adrfam": "ipv4", 00:24:58.649 "trsvcid": "4420", 00:24:58.649 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:58.649 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:58.649 "hdgst": false, 00:24:58.649 "ddgst": false 00:24:58.649 }, 00:24:58.649 "method": "bdev_nvme_attach_controller" 00:24:58.649 },{ 00:24:58.649 "params": { 00:24:58.649 "name": "Nvme7", 00:24:58.649 "trtype": "tcp", 00:24:58.649 "traddr": "10.0.0.2", 
00:24:58.649 "adrfam": "ipv4", 00:24:58.649 "trsvcid": "4420", 00:24:58.649 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:58.649 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:58.649 "hdgst": false, 00:24:58.649 "ddgst": false 00:24:58.649 }, 00:24:58.649 "method": "bdev_nvme_attach_controller" 00:24:58.649 },{ 00:24:58.649 "params": { 00:24:58.649 "name": "Nvme8", 00:24:58.649 "trtype": "tcp", 00:24:58.649 "traddr": "10.0.0.2", 00:24:58.649 "adrfam": "ipv4", 00:24:58.649 "trsvcid": "4420", 00:24:58.649 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:58.649 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:58.649 "hdgst": false, 00:24:58.649 "ddgst": false 00:24:58.649 }, 00:24:58.649 "method": "bdev_nvme_attach_controller" 00:24:58.649 },{ 00:24:58.649 "params": { 00:24:58.649 "name": "Nvme9", 00:24:58.649 "trtype": "tcp", 00:24:58.649 "traddr": "10.0.0.2", 00:24:58.649 "adrfam": "ipv4", 00:24:58.649 "trsvcid": "4420", 00:24:58.649 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:58.649 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:58.649 "hdgst": false, 00:24:58.649 "ddgst": false 00:24:58.649 }, 00:24:58.649 "method": "bdev_nvme_attach_controller" 00:24:58.649 },{ 00:24:58.649 "params": { 00:24:58.649 "name": "Nvme10", 00:24:58.649 "trtype": "tcp", 00:24:58.649 "traddr": "10.0.0.2", 00:24:58.649 "adrfam": "ipv4", 00:24:58.649 "trsvcid": "4420", 00:24:58.649 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:58.649 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:58.649 "hdgst": false, 00:24:58.649 "ddgst": false 00:24:58.649 }, 00:24:58.649 "method": "bdev_nvme_attach_controller" 00:24:58.649 }' 00:24:58.649 [2024-12-06 17:40:50.588868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.649 [2024-12-06 17:40:50.625429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.558 Running I/O for 10 seconds... 
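The block of "params" stanzas echoed back just above is produced by the gen_nvmf_target_json fragments traced at nvmf/common.sh@560-586: one bdev_nvme_attach_controller stanza is appended per subsystem, the stanzas are comma-joined, pretty-printed with jq, and handed to bdevperf as --json /dev/fd/63 via process substitution (shutdown.sh@103). A condensed reconstruction from the trace follows; the variables resolve to TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420 in this run, and the real helper may carry extra steps the trace does not show.

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller stanza per requested subsystem, as traced above
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the stanzas into a bdev subsystem config and pretty-print it
    local IFS=,
    jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}

# Usage as in the trace:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
#       -q 64 -o 65536 -w verify -t 10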
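Once bdevperf reports that I/O is running, the harness does not kill it blindly: the waitforio helper (target/shutdown.sh@51-70, traced next) polls the bdevperf RPC socket until Nvme1n1 has accumulated at least 100 reads, retrying up to 10 times with a 0.25 s pause between polls. A minimal reconstruction from the trace, assuming rpc_cmd is the harness wrapper around scripts/rpc.py:

waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        # Ask bdevperf (not the target) for per-bdev I/O statistics
        count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# In this run: waitforio /var/tmp/bdevperf.sock Nvme1n1
# read_io_count climbs 3 -> 67 -> 131 across the three polls below, so the
# third pass breaks out and tc2 proceeds to kill bdevperf (pid 1684857).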
00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:00.558 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:00.818 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:00.818 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:00.818 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:00.818 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:00.818 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.818 17:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.818 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.818 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:00.818 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:00.818 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1684857 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1684857 ']' 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1684857 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.079 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1684857 00:25:01.340 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.340 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.340 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1684857' 00:25:01.340 killing process with pid 1684857 00:25:01.340 17:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1684857
00:25:01.340 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1684857
00:25:01.340 Received shutdown signal, test time was about 0.968993 seconds
00:25:01.340
00:25:01.340 Latency(us)
00:25:01.340 [2024-12-06T16:40:53.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:01.340 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.340 Verification LBA range: start 0x0 length 0x400
00:25:01.340 Nvme1n1 : 0.96 266.09 16.63 0.00 0.00 237341.33 13653.33 249910.61
00:25:01.340 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.340 Verification LBA range: start 0x0 length 0x400
00:25:01.340 Nvme2n1 : 0.96 267.62 16.73 0.00 0.00 231086.29 18240.85 244667.73
00:25:01.340 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.340 Verification LBA range: start 0x0 length 0x400
00:25:01.340 Nvme3n1 : 0.96 272.10 17.01 0.00 0.00 222345.37 3495.25 248162.99
00:25:01.340 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.340 Verification LBA range: start 0x0 length 0x400
00:25:01.340 Nvme4n1 : 0.96 265.83 16.61 0.00 0.00 223034.03 29928.11 234181.97
00:25:01.340 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.340 Verification LBA range: start 0x0 length 0x400
00:25:01.340 Nvme5n1 : 0.94 204.79 12.80 0.00 0.00 283007.15 18131.63 249910.61
00:25:01.340 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.340 Verification LBA range: start 0x0 length 0x400
00:25:01.340 Nvme6n1 : 0.93 205.90 12.87 0.00 0.00 274831.08 16165.55 241172.48
00:25:01.340 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.340 Verification LBA range: start 0x0 length 0x400
00:25:01.340 Nvme7n1 : 0.95 269.57 16.85 0.00 0.00 205544.11 15182.51 249910.61
00:25:01.340 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.340 Verification LBA range: start 0x0 length 0x400
00:25:01.340 Nvme8n1 : 0.97 255.14 15.95 0.00 0.00 211319.11 14527.15 241172.48
00:25:01.340 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.340 Verification LBA range: start 0x0 length 0x400
00:25:01.340 Nvme9n1 : 0.94 203.32 12.71 0.00 0.00 259300.98 21954.56 253405.87
00:25:01.340 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.340 Verification LBA range: start 0x0 length 0x400
00:25:01.340 Nvme10n1 : 0.95 202.72 12.67 0.00 0.00 253795.56 18786.99 267386.88
00:25:01.340 [2024-12-06T16:40:53.406Z] ===================================================================================================================
00:25:01.340 [2024-12-06T16:40:53.406Z] Total : 2413.09 150.82 0.00 0.00 237172.06 3495.25 267386.88
00:25:01.340 17:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1684787
00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:02.724 17:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:02.724 rmmod nvme_tcp 00:25:02.724 rmmod nvme_fabrics 00:25:02.724 rmmod nvme_keyring 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1684787 ']' 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1684787 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1684787 ']' 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1684787 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1684787 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1684787' 00:25:02.724 killing process with pid 1684787 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1684787 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1684787 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:02.724 17:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.724 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.270 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.270 00:25:05.270 real 0m8.152s 00:25:05.270 user 0m25.141s 00:25:05.270 sys 0m1.306s 00:25:05.270 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.270 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.270 ************************************ 00:25:05.270 END TEST nvmf_shutdown_tc2 00:25:05.270 ************************************ 00:25:05.270 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:05.270 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:05.270 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.270 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:05.270 ************************************ 00:25:05.270 START TEST nvmf_shutdown_tc3 00:25:05.270 ************************************ 00:25:05.270 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:05.271 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:05.271 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.271 17:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:05.271 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:05.271 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:05.271 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.272 17:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.272 17:40:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:25:05.272 00:25:05.272 --- 10.0.0.2 ping statistics --- 00:25:05.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.272 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:25:05.272 00:25:05.272 --- 10.0.0.1 ping statistics --- 00:25:05.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.272 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1685059 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1685059 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:05.272 17:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1685059 ']' 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.272 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:05.532 [2024-12-06 17:40:57.339497] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:25:05.532 [2024-12-06 17:40:57.339564] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.532 [2024-12-06 17:40:57.433239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:05.532 [2024-12-06 17:40:57.467121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.532 [2024-12-06 17:40:57.467153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.532 [2024-12-06 17:40:57.467159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.532 [2024-12-06 17:40:57.467164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.532 [2024-12-06 17:40:57.467171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
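The nvmf_tcp_init sequence traced above is the entire test topology: a network namespace for the target side, 10.0.0.2/10.0.0.1 on the two ice ports, an iptables ACCEPT for the NVMe/TCP listener port, and a bidirectional ping check. Below is a minimal sketch of the same steps, assuming a veth pair in place of the physical cvl_0_0/cvl_0_1 ports so it can run on any Linux box with iproute2 (the names veth_ini/veth_tgt are hypothetical, not from the log):

#!/usr/bin/env bash
# Sketch of the namespace topology nvmf_tcp_init builds above, using a veth
# pair instead of the physical ice ports. Assumption: run as root with
# iproute2 and iptables installed; interface names are made up.
set -e

NS=cvl_0_0_ns_spdk                          # target lives in its own netns, as in the log

ip netns add "$NS"
ip link add veth_ini type veth peer name veth_tgt
ip link set veth_tgt netns "$NS"            # target-side interface moves into the namespace

ip addr add 10.0.0.1/24 dev veth_ini                        # initiator IP (host side)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt    # target IP (namespaced)

ip link set veth_ini up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP listener port, mirroring the ipts/iptables call traced above
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                          # initiator -> target, as in the log
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator

Deleting the namespace afterwards (ip netns delete cvl_0_0_ns_spdk) destroys the namespaced veth end, which takes its host-side peer down with it.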
00:25:05.532 [2024-12-06 17:40:57.468479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.532 [2024-12-06 17:40:57.468600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.532 [2024-12-06 17:40:57.468720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:05.532 [2024-12-06 17:40:57.468866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.103 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.103 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:06.103 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.103 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.103 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.363 [2024-12-06 17:40:58.184997] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.363 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.363 Malloc1 00:25:06.363 [2024-12-06 17:40:58.291207] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.363 Malloc2 00:25:06.363 Malloc3 00:25:06.363 Malloc4 00:25:06.363 Malloc5 00:25:06.624 Malloc6 00:25:06.624 Malloc7 00:25:06.624 Malloc8 00:25:06.624 Malloc9 00:25:06.624 Malloc10 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1685135 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1685135 /var/tmp/bdevperf.sock 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1685135 ']' 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.624 17:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:25:06.624 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.885 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.886 { 00:25:06.886 "params": { 00:25:06.886 "name": "Nvme$subsystem", 00:25:06.886 "trtype": "$TEST_TRANSPORT", 00:25:06.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.886 "adrfam": "ipv4", 00:25:06.886 "trsvcid": "$NVMF_PORT", 00:25:06.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.886 "hdgst": ${hdgst:-false}, 00:25:06.886 "ddgst": ${ddgst:-false} 00:25:06.886 }, 00:25:06.886 "method": "bdev_nvme_attach_controller" 00:25:06.886 } 00:25:06.886 EOF 00:25:06.886 )") 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.886 { 00:25:06.886 "params": { 00:25:06.886 "name": "Nvme$subsystem", 00:25:06.886 "trtype": "$TEST_TRANSPORT", 00:25:06.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.886 "adrfam": "ipv4", 00:25:06.886 "trsvcid": "$NVMF_PORT", 00:25:06.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.886 "hdgst": ${hdgst:-false}, 00:25:06.886 "ddgst": ${ddgst:-false} 00:25:06.886 }, 00:25:06.886 "method": "bdev_nvme_attach_controller" 00:25:06.886 } 00:25:06.886 EOF 00:25:06.886 )") 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.886 { 00:25:06.886 "params": { 00:25:06.886 
"name": "Nvme$subsystem", 00:25:06.886 "trtype": "$TEST_TRANSPORT", 00:25:06.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.886 "adrfam": "ipv4", 00:25:06.886 "trsvcid": "$NVMF_PORT", 00:25:06.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.886 "hdgst": ${hdgst:-false}, 00:25:06.886 "ddgst": ${ddgst:-false} 00:25:06.886 }, 00:25:06.886 "method": "bdev_nvme_attach_controller" 00:25:06.886 } 00:25:06.886 EOF 00:25:06.886 )") 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.886 { 00:25:06.886 "params": { 00:25:06.886 "name": "Nvme$subsystem", 00:25:06.886 "trtype": "$TEST_TRANSPORT", 00:25:06.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.886 "adrfam": "ipv4", 00:25:06.886 "trsvcid": "$NVMF_PORT", 00:25:06.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.886 "hdgst": ${hdgst:-false}, 00:25:06.886 "ddgst": ${ddgst:-false} 00:25:06.886 }, 00:25:06.886 "method": "bdev_nvme_attach_controller" 00:25:06.886 } 00:25:06.886 EOF 00:25:06.886 )") 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.886 { 00:25:06.886 "params": { 00:25:06.886 "name": "Nvme$subsystem", 00:25:06.886 "trtype": "$TEST_TRANSPORT", 00:25:06.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.886 "adrfam": "ipv4", 00:25:06.886 "trsvcid": "$NVMF_PORT", 00:25:06.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.886 "hdgst": ${hdgst:-false}, 00:25:06.886 "ddgst": ${ddgst:-false} 00:25:06.886 }, 00:25:06.886 "method": "bdev_nvme_attach_controller" 00:25:06.886 } 00:25:06.886 EOF 00:25:06.886 )") 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.886 { 00:25:06.886 "params": { 00:25:06.886 "name": "Nvme$subsystem", 00:25:06.886 "trtype": "$TEST_TRANSPORT", 00:25:06.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.886 "adrfam": "ipv4", 00:25:06.886 "trsvcid": "$NVMF_PORT", 00:25:06.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.886 "hdgst": ${hdgst:-false}, 00:25:06.886 "ddgst": ${ddgst:-false} 00:25:06.886 }, 00:25:06.886 "method": "bdev_nvme_attach_controller" 00:25:06.886 } 00:25:06.886 EOF 00:25:06.886 )") 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:06.886 [2024-12-06 17:40:58.735070] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:25:06.886 [2024-12-06 17:40:58.735128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685135 ] 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.886 { 00:25:06.886 "params": { 00:25:06.886 "name": "Nvme$subsystem", 00:25:06.886 "trtype": "$TEST_TRANSPORT", 00:25:06.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.886 "adrfam": "ipv4", 00:25:06.886 "trsvcid": "$NVMF_PORT", 00:25:06.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.886 "hdgst": ${hdgst:-false}, 00:25:06.886 "ddgst": ${ddgst:-false} 00:25:06.886 }, 00:25:06.886 "method": "bdev_nvme_attach_controller" 00:25:06.886 } 00:25:06.886 EOF 00:25:06.886 )") 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.886 { 00:25:06.886 "params": { 00:25:06.886 "name": "Nvme$subsystem", 00:25:06.886 "trtype": "$TEST_TRANSPORT", 00:25:06.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.886 "adrfam": "ipv4", 00:25:06.886 "trsvcid": "$NVMF_PORT", 00:25:06.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.886 "hdgst": ${hdgst:-false}, 00:25:06.886 "ddgst": ${ddgst:-false} 00:25:06.886 }, 00:25:06.886 "method": "bdev_nvme_attach_controller" 00:25:06.886 } 00:25:06.886 EOF 00:25:06.886 )") 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.886 { 00:25:06.886 "params": { 00:25:06.886 "name": "Nvme$subsystem", 00:25:06.886 "trtype": "$TEST_TRANSPORT", 00:25:06.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.886 "adrfam": "ipv4", 00:25:06.886 "trsvcid": "$NVMF_PORT", 00:25:06.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.886 "hdgst": ${hdgst:-false}, 00:25:06.886 "ddgst": ${ddgst:-false} 00:25:06.886 }, 00:25:06.886 "method": "bdev_nvme_attach_controller" 00:25:06.886 } 00:25:06.886 EOF 00:25:06.886 )") 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.886 { 00:25:06.886 "params": { 00:25:06.886 "name": "Nvme$subsystem", 00:25:06.886 "trtype": "$TEST_TRANSPORT", 00:25:06.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.886 
"adrfam": "ipv4", 00:25:06.886 "trsvcid": "$NVMF_PORT", 00:25:06.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.886 "hdgst": ${hdgst:-false}, 00:25:06.886 "ddgst": ${ddgst:-false} 00:25:06.886 }, 00:25:06.886 "method": "bdev_nvme_attach_controller" 00:25:06.886 } 00:25:06.886 EOF 00:25:06.886 )") 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:25:06.886 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:06.886 "params": { 00:25:06.886 "name": "Nvme1", 00:25:06.886 "trtype": "tcp", 00:25:06.886 "traddr": "10.0.0.2", 00:25:06.886 "adrfam": "ipv4", 00:25:06.886 "trsvcid": "4420", 00:25:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:06.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:06.887 "hdgst": false, 00:25:06.887 "ddgst": false 00:25:06.887 }, 00:25:06.887 "method": "bdev_nvme_attach_controller" 00:25:06.887 },{ 00:25:06.887 "params": { 00:25:06.887 "name": "Nvme2", 00:25:06.887 "trtype": "tcp", 00:25:06.887 "traddr": "10.0.0.2", 00:25:06.887 "adrfam": "ipv4", 00:25:06.887 "trsvcid": "4420", 00:25:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:06.887 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:06.887 "hdgst": false, 00:25:06.887 "ddgst": false 00:25:06.887 }, 00:25:06.887 "method": "bdev_nvme_attach_controller" 00:25:06.887 },{ 00:25:06.887 "params": { 00:25:06.887 "name": "Nvme3", 00:25:06.887 "trtype": "tcp", 00:25:06.887 "traddr": "10.0.0.2", 00:25:06.887 "adrfam": "ipv4", 00:25:06.887 "trsvcid": "4420", 00:25:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:06.887 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:06.887 "hdgst": false, 00:25:06.887 "ddgst": false 00:25:06.887 }, 00:25:06.887 "method": "bdev_nvme_attach_controller" 00:25:06.887 },{ 00:25:06.887 "params": { 00:25:06.887 "name": "Nvme4", 00:25:06.887 "trtype": "tcp", 00:25:06.887 "traddr": "10.0.0.2", 00:25:06.887 "adrfam": "ipv4", 00:25:06.887 "trsvcid": "4420", 00:25:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:06.887 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:06.887 "hdgst": false, 00:25:06.887 "ddgst": false 00:25:06.887 }, 00:25:06.887 "method": "bdev_nvme_attach_controller" 00:25:06.887 },{ 00:25:06.887 "params": { 00:25:06.887 "name": "Nvme5", 00:25:06.887 "trtype": "tcp", 00:25:06.887 "traddr": "10.0.0.2", 00:25:06.887 "adrfam": "ipv4", 00:25:06.887 "trsvcid": "4420", 00:25:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:06.887 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:06.887 "hdgst": false, 00:25:06.887 "ddgst": false 00:25:06.887 }, 00:25:06.887 "method": "bdev_nvme_attach_controller" 00:25:06.887 },{ 00:25:06.887 "params": { 00:25:06.887 "name": "Nvme6", 00:25:06.887 "trtype": "tcp", 00:25:06.887 "traddr": "10.0.0.2", 00:25:06.887 "adrfam": "ipv4", 00:25:06.887 "trsvcid": "4420", 00:25:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:06.887 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:06.887 "hdgst": false, 00:25:06.887 "ddgst": false 00:25:06.887 }, 00:25:06.887 "method": "bdev_nvme_attach_controller" 00:25:06.887 },{ 00:25:06.887 "params": { 00:25:06.887 "name": "Nvme7", 00:25:06.887 "trtype": "tcp", 00:25:06.887 "traddr": "10.0.0.2", 
00:25:06.887 "adrfam": "ipv4", 00:25:06.887 "trsvcid": "4420", 00:25:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:06.887 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:06.887 "hdgst": false, 00:25:06.887 "ddgst": false 00:25:06.887 }, 00:25:06.887 "method": "bdev_nvme_attach_controller" 00:25:06.887 },{ 00:25:06.887 "params": { 00:25:06.887 "name": "Nvme8", 00:25:06.887 "trtype": "tcp", 00:25:06.887 "traddr": "10.0.0.2", 00:25:06.887 "adrfam": "ipv4", 00:25:06.887 "trsvcid": "4420", 00:25:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:06.887 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:06.887 "hdgst": false, 00:25:06.887 "ddgst": false 00:25:06.887 }, 00:25:06.887 "method": "bdev_nvme_attach_controller" 00:25:06.887 },{ 00:25:06.887 "params": { 00:25:06.887 "name": "Nvme9", 00:25:06.887 "trtype": "tcp", 00:25:06.887 "traddr": "10.0.0.2", 00:25:06.887 "adrfam": "ipv4", 00:25:06.887 "trsvcid": "4420", 00:25:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:06.887 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:06.887 "hdgst": false, 00:25:06.887 "ddgst": false 00:25:06.887 }, 00:25:06.887 "method": "bdev_nvme_attach_controller" 00:25:06.887 },{ 00:25:06.887 "params": { 00:25:06.887 "name": "Nvme10", 00:25:06.887 "trtype": "tcp", 00:25:06.887 "traddr": "10.0.0.2", 00:25:06.887 "adrfam": "ipv4", 00:25:06.887 "trsvcid": "4420", 00:25:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:06.887 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:06.887 "hdgst": false, 00:25:06.887 "ddgst": false 00:25:06.887 }, 00:25:06.887 "method": "bdev_nvme_attach_controller" 00:25:06.887 }' 00:25:06.887 [2024-12-06 17:40:58.825602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.887 [2024-12-06 17:40:58.862021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.797 Running I/O for 10 seconds... 
00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1685059 00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1685059 ']' 
00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1685059
00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1685059
00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1685059'
killing process with pid 1685059
00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1685059
00:25:09.390 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1685059
00:25:09.390 [2024-12-06 17:41:01.359901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadada0 is same with the state(6) to be set
[... same tcp.c:1790 message for tqpair=0xadada0 repeated dozens of times, timestamps 17:41:01.359950 through 17:41:01.360250 ...]
00:25:09.391 [2024-12-06 17:41:01.361192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd970 is same with the state(6) to be set
00:25:09.391 [2024-12-06 17:41:01.362008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb290 is same with the state(6) to be set
[... same message for tqpair=0xadb290 repeated dozens of times, timestamps 17:41:01.362022 through 17:41:01.362317 ...]
00:25:09.392 [2024-12-06 17:41:01.363609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set
[... same message for tqpair=0xadb760 repeated dozens of times, timestamps 17:41:01.363633 through 17:41:01.363821, trace continues ...]
00:25:09.392 [2024-12-06 17:41:01.363826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is 
same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.363936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb760 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.364813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.392 [2024-12-06 17:41:01.364836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.364998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365033] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadbc50 is same with the state(6) to be set 
00:25:09.393 [2024-12-06 17:41:01.365756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is 
same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.393 [2024-12-06 17:41:01.365878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.365998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc120 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366747] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 
00:25:09.394 [2024-12-06 17:41:01.366852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.394 [2024-12-06 17:41:01.366893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is 
same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.366995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.367000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.367005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.367009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.367014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.367018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.367023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadc5f0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.367978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.367991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.367996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368117] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.395 [2024-12-06 17:41:01.368215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 
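[editor's note] The repeated tcp.c:1790 message above is the SPDK TCP transport reporting a redundant recv-state transition: a caller keeps requesting the recv state the qpair is already in. The sketch below is a minimal, hypothetical reconstruction of that kind of guard, built only from the log text; it is not SPDK's actual implementation, and the struct/function names and the state value 6 are assumptions.

/* Minimal sketch (assumed names, not SPDK's code): a set-state guard that
 * emits the message above when the requested recv state equals the current
 * one, i.e. a redundant transition.  "state(6)" in the log is modeled as
 * the enum value 6 here. */
#include <stdio.h>

enum recv_state { RECV_STATE_6 = 6 };              /* "state(6)" in the log */

struct tqpair { enum recv_state recv_state; };

static void set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
    if (tqpair->recv_state == state) {             /* redundant transition */
        fprintf(stderr, "The recv state of tqpair=%p is same with the "
                        "state(%d) to be set\n", (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tqpair q = { RECV_STATE_6 };
    set_recv_state(&q, RECV_STATE_6);              /* reproduces the message */
    return 0;
}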
00:25:09.395 [2024-12-06 17:41:01.373562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.395 [2024-12-06 17:41:01.373598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
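[editor's note] The "(00/08)" on each completion above is the NVMe status pair: status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion, which matches the printed ABORTED - SQ DELETION text. A short hedged decode follows, assuming the usual layout of the 16-bit completion status word (phase bit 0, SC in bits 8:1, SCT in bits 11:9); the layout is an assumption from the NVMe base specification, not taken from this log.

/* Hedged sketch: decode the "(00/08)" status reported on the completions
 * above, assuming phase in bit 0, status code (SC) in bits 8:1, and status
 * code type (SCT) in bits 11:9 of the status word. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t status = (uint16_t)((0x0u << 9) | (0x08u << 1)); /* SCT=0, SC=8 */
    unsigned sct = (status >> 9) & 0x7u;
    unsigned sc  = (status >> 1) & 0xffu;
    printf("sct=%02x sc=%02x -> %s\n", sct, sc,
           (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
    return 0;
}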
[... the same WRITE command/completion pair repeated for cid:15-62 (lba 18304-24320, len:128 each), every command completed with ABORTED - SQ DELETION (00/08), through 17:41:01.374429 ...]
00:25:09.397 [2024-12-06 17:41:01.374439] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374607] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.397 [2024-12-06 17:41:01.374685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.374810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.374826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.374841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.374856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704c90 is same with the state(6) to be set 00:25:09.397 [2024-12-06 17:41:01.374891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.374900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.374918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.374933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.374949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.374956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1620610 is same with the state(6) to be set 00:25:09.397 [2024-12-06 17:41:01.374983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.374992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.375000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.375008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.375015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.375022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.375031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.375038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.375045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7d6c0 is same with the state(6) to be set 00:25:09.397 [2024-12-06 17:41:01.375065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.375073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.375082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.375089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.375097] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.375104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.397 [2024-12-06 17:41:01.375112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.397 [2024-12-06 17:41:01.375120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b760c0 is same with the state(6) to be set 00:25:09.398 [2024-12-06 17:41:01.375161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b329e0 is same with the state(6) to be set 00:25:09.398 [2024-12-06 17:41:01.375249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1707960 is same with the state(6) to be set 00:25:09.398 [2024-12-06 17:41:01.375334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1708460 is same with the state(6) to be set 00:25:09.398 [2024-12-06 17:41:01.375420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b335b0 is same with the state(6) to be set 00:25:09.398 [2024-12-06 17:41:01.375507] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.398 [2024-12-06 17:41:01.375560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17088d0 is same with the state(6) to be set 00:25:09.398 [2024-12-06 17:41:01.375870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.375891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.375911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.375928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.375949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.375966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.375983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.375992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.376009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.376026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.376042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.376059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.376075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.376092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.376108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.376125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.376141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.398 [2024-12-06 17:41:01.376160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.398 [2024-12-06 17:41:01.376167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.376460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.376468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.378403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.378429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 
00:25:09.399 [2024-12-06 17:41:01.378438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.378445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.378452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.378459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.378465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.378471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.378477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.378483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.378490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.378496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadcfb0 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.379088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd480 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.379107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd480 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.379112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd480 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.379116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd480 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.379121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd480 is same with the state(6) to be set 00:25:09.399 [2024-12-06 17:41:01.388116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.388148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.388160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.388167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.388178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.388187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.388196] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.388204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.388213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.388221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.388230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.388238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.388247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.388254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.388264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.388271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.388280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.399 [2024-12-06 17:41:01.388288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.399 [2024-12-06 17:41:01.388297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.388624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.400 [2024-12-06 17:41:01.388632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.390353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1704c90 (9): Bad file descriptor 00:25:09.400 [2024-12-06 17:41:01.390388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1620610 (9): Bad file descriptor 00:25:09.400 [2024-12-06 17:41:01.390404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7d6c0 (9): Bad file descriptor 00:25:09.400 [2024-12-06 17:41:01.390420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b760c0 (9): Bad file descriptor 00:25:09.400 [2024-12-06 17:41:01.390452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.400 [2024-12-06 17:41:01.390462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.390471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:09.400 [2024-12-06 17:41:01.390478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.400 [2024-12-06 17:41:01.390486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000
00:25:09.400 [2024-12-06 17:41:01.390494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.400 [2024-12-06 17:41:01.390502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:09.400 [2024-12-06 17:41:01.390509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.400 [2024-12-06 17:41:01.390516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75e50 is same with the state(6) to be set
00:25:09.400 [2024-12-06 17:41:01.390534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b329e0 (9): Bad file descriptor
00:25:09.400 [2024-12-06 17:41:01.390556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1707960 (9): Bad file descriptor
00:25:09.400 [2024-12-06 17:41:01.390572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1708460 (9): Bad file descriptor
00:25:09.400 [2024-12-06 17:41:01.390589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b335b0 (9): Bad file descriptor
00:25:09.400 [2024-12-06 17:41:01.390605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17088d0 (9): Bad file descriptor
00:25:09.400-00:25:09.402 [2024-12-06 17:41:01.390872-01.391970] [condensed: 64 repeated NOTICE pairs] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 (stride 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.402 [2024-12-06 17:41:01.393295-01.393525] [condensed: 13 repeated NOTICE pairs] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51..63 nsid:1 lba:22912..24448 (stride 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.402-00:25:09.404 [2024-12-06 17:41:01.393534-01.394391] [condensed: 51 repeated NOTICE pairs] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..50 nsid:1 lba:16384..22784 (stride 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.404 [2024-12-06 17:41:01.394472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:25:09.404 [2024-12-06 17:41:01.397442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:25:09.404 [2024-12-06 17:41:01.397944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.404 [2024-12-06 17:41:01.397983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7d6c0 with addr=10.0.0.2, port=4420
00:25:09.404 [2024-12-06 17:41:01.397995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7d6c0 is same with the state(6) to be set
00:25:09.404 [2024-12-06 17:41:01.398573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:25:09.404 [2024-12-06 17:41:01.398597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:25:09.404 [2024-12-06 17:41:01.398611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75e50 (9): Bad file descriptor
00:25:09.404 [2024-12-06 17:41:01.398977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.404 [2024-12-06 17:41:01.399016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1620610 with addr=10.0.0.2, port=4420
00:25:09.404 [2024-12-06 17:41:01.399028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1620610 is same with the state(6) to be set
00:25:09.404 [2024-12-06 17:41:01.399043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7d6c0 (9): Bad file descriptor
00:25:09.404 [2024-12-06 17:41:01.399111-01.399582] [condensed: 6 repeats] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:09.404 [2024-12-06 17:41:01.400204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.404 [2024-12-06 17:41:01.400220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b335b0 with addr=10.0.0.2, port=4420
00:25:09.404 [2024-12-06 17:41:01.400233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b335b0 is same with the state(6) to be set
00:25:09.404 [2024-12-06 17:41:01.400257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1620610 (9): Bad file descriptor
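The numeric errno values in the flush and connect failures above decode as follows on Linux: errno 111 is ECONNREFUSED (nothing is listening at 10.0.0.2:4420 while the target-side subsystems are mid-reset), and the "(9)" in the flush failures is EBADF (the qpair's socket had already been closed by the time the flush ran). A minimal, standalone C check of that mapping, independent of SPDK:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* The two errno values seen in the log: 111 on connect(), 9 on flush. */
        printf("ECONNREFUSED = %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        printf("EBADF        = %d: %s\n", EBADF, strerror(EBADF));
        return 0;
    }

On glibc this prints "ECONNREFUSED = 111: Connection refused" and "EBADF = 9: Bad file descriptor", matching the messages emitted by posix.c and nvme_tcp.c above.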
00:25:09.404 [2024-12-06 17:41:01.400268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:25:09.404 [2024-12-06 17:41:01.400274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:25:09.404 [2024-12-06 17:41:01.400283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:25:09.404 [2024-12-06 17:41:01.400291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:25:09.404 [2024-12-06 17:41:01.400884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.404 [2024-12-06 17:41:01.400922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75e50 with addr=10.0.0.2, port=4420
00:25:09.404 [2024-12-06 17:41:01.400933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75e50 is same with the state(6) to be set
00:25:09.404 [2024-12-06 17:41:01.400948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b335b0 (9): Bad file descriptor
00:25:09.404 [2024-12-06 17:41:01.400958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:25:09.404 [2024-12-06 17:41:01.400964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:25:09.404 [2024-12-06 17:41:01.400973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:25:09.404 [2024-12-06 17:41:01.400981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:25:09.404 [2024-12-06 17:41:01.401115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75e50 (9): Bad file descriptor
00:25:09.404 [2024-12-06 17:41:01.401128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:25:09.404 [2024-12-06 17:41:01.401134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:25:09.404 [2024-12-06 17:41:01.401142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:25:09.404 [2024-12-06 17:41:01.401148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
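The "(00/08)" printed with every aborted completion is the NVMe status code type / status code pair: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion", the expected disposition for I/O still outstanding when a submission queue is torn down during a controller reset. A hedged sketch of decoding that pair from completion dword 3, with the field layout taken from the NVMe base specification (the function and variable names here are illustrative, not SPDK's):

    #include <stdint.h>
    #include <stdio.h>

    /* Completion DW3 layout per the NVMe base spec:
     * P = bit 16, SC = bits 24:17, SCT = bits 27:25, M = bit 30, DNR = bit 31. */
    static void decode_cpl_status(uint32_t dw3)
    {
        unsigned p   = (dw3 >> 16) & 0x1;
        unsigned sc  = (dw3 >> 17) & 0xff;
        unsigned sct = (dw3 >> 25) & 0x7;
        unsigned m   = (dw3 >> 30) & 0x1;
        unsigned dnr = (dw3 >> 31) & 0x1;

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        if (sct == 0x0 && sc == 0x08)
            printf("-> ABORTED - SQ DELETION\n");
    }

    int main(void)
    {
        decode_cpl_status(0x08u << 17); /* SCT 0x0, SC 0x08, all flag bits clear */
        return 0;
    }

The "p:0 m:0 dnr:0" in the log therefore means the phase bit is 0, the More bit is clear (no additional status in the error log), and Do Not Retry is clear, so the initiator may requeue these commands once a controller is available again.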
00:25:09.404-00:25:09.406 [2024-12-06 17:41:01.401183-01.402226] [condensed: 60 repeated NOTICE pairs] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..59 nsid:1 lba:16384..23936 (stride 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.406 [2024-12-06 17:41:01.402235] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.402243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.402252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.402259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.402268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.402276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.402285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.402293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.402301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190c840 is same with the state(6) to be set 00:25:09.406 [2024-12-06 17:41:01.403584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403710] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.403988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.403997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.406 [2024-12-06 17:41:01.404201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.406 [2024-12-06 17:41:01.404208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:09.407 [2024-12-06 17:41:01.404408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 
17:41:01.404578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.404709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.404717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190d8b0 is same with the state(6) to be set 00:25:09.407 [2024-12-06 17:41:01.405986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.406002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.406014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.406023] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.406034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.406042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.406051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.406058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.406068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.406075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.406084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.406092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.406101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.406108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.406118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.406126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.406135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.406143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.407 [2024-12-06 17:41:01.406152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.407 [2024-12-06 17:41:01.406160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.408 [2024-12-06 17:41:01.406759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.408 [2024-12-06 17:41:01.406770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.406985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.406994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.407001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.407011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.407018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.407028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.407035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.407044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.407052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.407061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.407068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.407078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.407087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.407095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a970 is same with the state(6) to be set 00:25:09.409 [2024-12-06 17:41:01.408374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.409 [2024-12-06 17:41:01.408730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.409 [2024-12-06 17:41:01.408740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.408985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.408994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:09.410 [2024-12-06 17:41:01.409207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 
17:41:01.409382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.410 [2024-12-06 17:41:01.409408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.410 [2024-12-06 17:41:01.409416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.409425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.409432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.409442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.409451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.409461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.409468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.409478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.409486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.409494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0e100 is same with the state(6) to be set 00:25:09.411 [2024-12-06 17:41:01.410763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.410985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.410992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.411 [2024-12-06 17:41:01.411374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.411 [2024-12-06 17:41:01.411386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.411887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.411896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0f330 is same with the state(6) to be set 00:25:09.412 [2024-12-06 17:41:01.413157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.413170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.413183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.413193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.413205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.413214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.413226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.413234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.413243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.413251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.413261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.412 [2024-12-06 17:41:01.413268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.412 [2024-12-06 17:41:01.413278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413323] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.413 [2024-12-06 17:41:01.413956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.413 [2024-12-06 17:41:01.413966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.414 [2024-12-06 17:41:01.413975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.414 [2024-12-06 17:41:01.413983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.414 [2024-12-06 17:41:01.413992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.414 [2024-12-06 17:41:01.413999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.414 [2024-12-06 17:41:01.414009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.414 [2024-12-06 17:41:01.414017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:09.414 [2024-12-06 17:41:01.414026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.414 [2024-12-06 17:41:01.414276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.414 [2024-12-06 17:41:01.414284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2cf30 is same with the state(6) to be set
00:25:09.414 [2024-12-06 17:41:01.415855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:09.414 [2024-12-06 17:41:01.415884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:25:09.414 [2024-12-06 17:41:01.415896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:25:09.414 [2024-12-06 17:41:01.415908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:25:09.414 [2024-12-06 17:41:01.415947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:25:09.414 [2024-12-06 17:41:01.415955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:25:09.414 [2024-12-06 17:41:01.415963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:25:09.414 [2024-12-06 17:41:01.415971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:25:09.414 [2024-12-06 17:41:01.416024] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:25:09.414 [2024-12-06 17:41:01.416036] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
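Each READ/ABORTED pair above is SPDK echoing one in-flight command that was discarded when its submission queue went away during the controller reset; the status tuple (00/08) decodes as Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, Command Aborted due to SQ Deletion. When an abort storm like this needs summarizing, a small pipeline over a saved console log is enough; this is a hedged sketch, and build.log is a hypothetical capture of this output, not a file the test produces:

    # Count SQ-DELETION aborts per queue id in a saved console log (assumed name: build.log).
    grep 'ABORTED - SQ DELETION' build.log |
      awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^qid:/) c[$i]++ }
           END { for (q in c) print q, c[q] }'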
00:25:09.414 [2024-12-06 17:41:01.416111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:25:09.414 task offset: 18176 on job bdev=Nvme10n1 fails
00:25:09.414 
00:25:09.414 Latency(us)
00:25:09.414 [2024-12-06T16:41:01.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:09.414 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.414 Job: Nvme1n1 ended in about 0.83 seconds with error
00:25:09.414 Verification LBA range: start 0x0 length 0x400
00:25:09.414 Nvme1n1 : 0.83 154.43 9.65 77.22 0.00 272761.17 23592.96 255153.49
00:25:09.414 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.414 Job: Nvme2n1 ended in about 0.83 seconds with error
00:25:09.414 Verification LBA range: start 0x0 length 0x400
00:25:09.414 Nvme2n1 : 0.83 153.99 9.62 76.99 0.00 267128.60 14964.05 251658.24
00:25:09.414 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.414 Job: Nvme3n1 ended in about 0.83 seconds with error
00:25:09.414 Verification LBA range: start 0x0 length 0x400
00:25:09.414 Nvme3n1 : 0.83 153.55 9.60 76.77 0.00 261423.50 18022.40 251658.24
00:25:09.414 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.414 Job: Nvme4n1 ended in about 0.84 seconds with error
00:25:09.414 Verification LBA range: start 0x0 length 0x400
00:25:09.414 Nvme4n1 : 0.84 153.11 9.57 76.55 0.00 255677.30 10540.37 251658.24
00:25:09.414 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.414 Job: Nvme5n1 ended in about 0.84 seconds with error
00:25:09.414 Verification LBA range: start 0x0 length 0x400
00:25:09.414 Nvme5n1 : 0.84 158.64 9.91 76.34 0.00 243731.82 22282.24 228939.09
00:25:09.414 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.414 Job: Nvme6n1 ended in about 0.82 seconds with error
00:25:09.414 Verification LBA range: start 0x0 length 0x400
00:25:09.414 Nvme6n1 : 0.82 155.86 9.74 77.93 0.00 237895.96 19114.67 256901.12
00:25:09.414 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.414 Job: Nvme7n1 ended in about 0.82 seconds with error
00:25:09.414 Verification LBA range: start 0x0 length 0x400
00:25:09.414 Nvme7n1 : 0.82 234.57 14.66 78.19 0.00 172847.15 16602.45 251658.24
00:25:09.414 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.414 Job: Nvme8n1 ended in about 0.84 seconds with error
00:25:09.414 Verification LBA range: start 0x0 length 0x400
00:25:09.414 Nvme8n1 : 0.84 152.24 9.51 76.12 0.00 231491.13 23483.73 221948.59
00:25:09.414 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.414 Job: Nvme9n1 ended in about 0.82 seconds with error
00:25:09.414 Verification LBA range: start 0x0 length 0x400
00:25:09.414 Nvme9n1 : 0.82 155.62 9.73 77.81 0.00 219034.17 7154.35 251658.24
00:25:09.414 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.414 Job: Nvme10n1 ended in about 0.82 seconds with error
00:25:09.414 Verification LBA range: start 0x0 length 0x400
00:25:09.414 Nvme10n1 : 0.82 156.93 9.81 78.47 0.00 210420.62 19223.89 270882.13
00:25:09.414 [2024-12-06T16:41:01.480Z] ===================================================================================================================
00:25:09.414 [2024-12-06T16:41:01.480Z] Total : 1628.94 101.81 772.39 0.00 235185.46 7154.35 270882.13
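The MiB/s column in the bdevperf table above follows directly from the IOPS column: with 65536-byte I/Os, one MiB is exactly 16 I/Os, so MiB/s = IOPS / 16 (154.43 / 16 = 9.65 for Nvme1n1, and 1628.94 / 16 = 101.81 for the Total row). A one-line sanity check of that relationship, as a sketch:

    # Check the bdevperf table: MiB/s should equal IOPS * 65536 / 2^20, i.e. IOPS / 16.
    awk 'BEGIN { printf "%.2f\n", 154.43 * 65536 / 1048576 }'   # prints 9.65, matching Nvme1n1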
00:25:09.676 [2024-12-06 17:41:01.442500] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:09.676 [2024-12-06 17:41:01.442550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:25:09.676 [2024-12-06 17:41:01.442879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.676 [2024-12-06 17:41:01.442900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17088d0 with addr=10.0.0.2, port=4420
00:25:09.676 [2024-12-06 17:41:01.442911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17088d0 is same with the state(6) to be set
00:25:09.676 [2024-12-06 17:41:01.443133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.676 [2024-12-06 17:41:01.443144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1708460 with addr=10.0.0.2, port=4420
00:25:09.676 [2024-12-06 17:41:01.443151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1708460 is same with the state(6) to be set
00:25:09.676 [2024-12-06 17:41:01.443433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.676 [2024-12-06 17:41:01.443444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1704c90 with addr=10.0.0.2, port=4420
00:25:09.676 [2024-12-06 17:41:01.443451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1704c90 is same with the state(6) to be set
00:25:09.676 [2024-12-06 17:41:01.443753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.676 [2024-12-06 17:41:01.443763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1707960 with addr=10.0.0.2, port=4420
00:25:09.676 [2024-12-06 17:41:01.443770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1707960 is same with the state(6) to be set
00:25:09.676 [2024-12-06 17:41:01.445384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:25:09.676 [2024-12-06 17:41:01.445401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:25:09.676 [2024-12-06 17:41:01.445411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:25:09.676 [2024-12-06 17:41:01.445420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:25:09.676 [2024-12-06 17:41:01.445811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.676 [2024-12-06 17:41:01.445825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b329e0 with addr=10.0.0.2, port=4420
00:25:09.676 [2024-12-06 17:41:01.445832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b329e0 is same with the state(6) to be set
00:25:09.676 [2024-12-06 17:41:01.446176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.676 [2024-12-06 17:41:01.446186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b760c0 with addr=10.0.0.2, port=4420
00:25:09.676 [2024-12-06 17:41:01.446193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b760c0 is same with the state(6) to be set
00:25:09.676 [2024-12-06 17:41:01.446207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17088d0 (9): Bad file descriptor
00:25:09.676 [2024-12-06 17:41:01.446219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1708460 (9): Bad file descriptor
00:25:09.676 [2024-12-06 17:41:01.446229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1704c90 (9): Bad file descriptor
00:25:09.676 [2024-12-06 17:41:01.446238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1707960 (9): Bad file descriptor
00:25:09.676 [2024-12-06 17:41:01.446275] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:25:09.676 [2024-12-06 17:41:01.446288] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:25:09.676 [2024-12-06 17:41:01.446298] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:25:09.676 [2024-12-06 17:41:01.446309] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:25:09.676 [2024-12-06 17:41:01.446646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.677 [2024-12-06 17:41:01.446659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7d6c0 with addr=10.0.0.2, port=4420
00:25:09.677 [2024-12-06 17:41:01.446667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7d6c0 is same with the state(6) to be set
00:25:09.677 [2024-12-06 17:41:01.446835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.677 [2024-12-06 17:41:01.446845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1620610 with addr=10.0.0.2, port=4420
00:25:09.677 [2024-12-06 17:41:01.446856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1620610 is same with the state(6) to be set
00:25:09.677 [2024-12-06 17:41:01.447242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.677 [2024-12-06 17:41:01.447252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b335b0 with addr=10.0.0.2, port=4420
00:25:09.677 [2024-12-06 17:41:01.447259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b335b0 is same with the state(6) to be set
00:25:09.677 [2024-12-06 17:41:01.447605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.677 [2024-12-06 17:41:01.447615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75e50 with addr=10.0.0.2, port=4420
00:25:09.677 [2024-12-06 17:41:01.447622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75e50 is same with the state(6) to be set
00:25:09.677 [2024-12-06 17:41:01.447631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b329e0 (9): Bad file descriptor
00:25:09.677 [2024-12-06 17:41:01.447644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b760c0 (9): Bad file descriptor
00:25:09.677 [2024-12-06 17:41:01.447654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:25:09.677 [2024-12-06 17:41:01.447661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:25:09.677 [2024-12-06 17:41:01.447669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:09.677 [2024-12-06 17:41:01.447678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:25:09.677 [2024-12-06 17:41:01.447686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:25:09.677 [2024-12-06 17:41:01.447693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:25:09.677 [2024-12-06 17:41:01.447700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:25:09.677 [2024-12-06 17:41:01.447706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:25:09.677 [2024-12-06 17:41:01.447713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:25:09.677 [2024-12-06 17:41:01.447720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:25:09.677 [2024-12-06 17:41:01.447727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:25:09.677 [2024-12-06 17:41:01.447733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:25:09.677 [2024-12-06 17:41:01.447740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:25:09.677 [2024-12-06 17:41:01.447746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:25:09.677 [2024-12-06 17:41:01.447753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:25:09.677 [2024-12-06 17:41:01.447760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:25:09.677 [2024-12-06 17:41:01.447833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7d6c0 (9): Bad file descriptor
00:25:09.677 [2024-12-06 17:41:01.447844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1620610 (9): Bad file descriptor
00:25:09.677 [2024-12-06 17:41:01.447854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b335b0 (9): Bad file descriptor
00:25:09.677 [2024-12-06 17:41:01.447863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75e50 (9): Bad file descriptor
00:25:09.677 [2024-12-06 17:41:01.447874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:25:09.677 [2024-12-06 17:41:01.447881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:25:09.677 [2024-12-06 17:41:01.447888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:25:09.677 [2024-12-06 17:41:01.447894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:25:09.677 [2024-12-06 17:41:01.447902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:25:09.677 [2024-12-06 17:41:01.447908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:25:09.677 [2024-12-06 17:41:01.447915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:25:09.677 [2024-12-06 17:41:01.447921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:25:09.677 [2024-12-06 17:41:01.447948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:25:09.677 [2024-12-06 17:41:01.447955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:25:09.677 [2024-12-06 17:41:01.447963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:25:09.677 [2024-12-06 17:41:01.447969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:25:09.677 [2024-12-06 17:41:01.447976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:25:09.677 [2024-12-06 17:41:01.447982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:25:09.677 [2024-12-06 17:41:01.447990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:25:09.677 [2024-12-06 17:41:01.447996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:25:09.677 [2024-12-06 17:41:01.448003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:25:09.677 [2024-12-06 17:41:01.448010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:25:09.677 [2024-12-06 17:41:01.448017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:25:09.677 [2024-12-06 17:41:01.448024] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:25:09.677 [2024-12-06 17:41:01.448031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:25:09.677 [2024-12-06 17:41:01.448037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:25:09.677 [2024-12-06 17:41:01.448044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:25:09.677 [2024-12-06 17:41:01.448051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
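The errno = 111 in the connect() failures above is ECONNREFUSED: the target side has already executed spdk_app_stop, so every reconnect attempt to 10.0.0.2:4420 is refused, and the half-open qpairs are then flushed with EBADF ("Bad file descriptor"). A quick reachability probe for that listener can be written with bash's /dev/tcp; a hedged sketch using only the address and port from this log:

    # Probe the NVMe/TCP listener; a refused connection reproduces the errno 111 path.
    if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener on 10.0.0.2:4420 accepted a TCP connection"
    else
        echo "connect failed (cf. connect() failed, errno = 111 above)"
    fi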
00:25:09.677 17:41:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:25:10.619 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1685135
00:25:10.619 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:25:10.619 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1685135
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1685135
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:10.620 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:10.620 rmmod nvme_tcp
00:25:10.620 rmmod nvme_fabrics
00:25:10.620 rmmod nvme_keyring
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1685059 ']'
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1685059
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1685059 ']'
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1685059
00:25:10.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1685059) - No such process
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1685059 is not found'
00:25:10.948 Process with pid 1685059 is not found
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:10.948 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:10.949 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:25:10.949 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:25:10.949 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:10.949 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:25:10.949 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:10.949 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:10.949 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:10.949 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:10.949 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:12.879 
00:25:12.879 real 0m7.883s
00:25:12.879 user 0m19.698s
00:25:12.879 sys 0m1.240s
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:25:12.879 ************************************
00:25:12.879 END TEST nvmf_shutdown_tc3
00:25:12.879 ************************************
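The "No such process" message above is benign: the app being cleaned up (pid 1685059) had already exited, and the autotest killprocess helper probes with kill -0 before killing. A minimal sketch of that guard pattern, assuming the pid to reap is a child of the current shell:

    # Guarded kill: probe with signal 0 first so a vanished pid is reported, not fatal.
    killprocess_sketch() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid"    # wait only succeeds for child processes
        else
            echo "Process with pid $pid is not found"
        fi
    }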
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:12.879 ************************************
00:25:12.879 START TEST nvmf_shutdown_tc4
00:25:12.879 ************************************
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:12.879 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:25:12.880 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:25:12.880 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:25:12.880 Found net devices under 0000:4b:00.0: cvl_0_0
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:25:12.880 Found net devices under 0000:4b:00.1: cvl_0_1
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
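gather_supported_nvmf_pci_devs, traced above, resolves each supported PCI function to its kernel net devices through sysfs; here 0000:4b:00.0 and 0000:4b:00.1 both map to Intel E810 ports (cvl_0_0 and cvl_0_1). The lookup itself is just a glob; a hedged standalone sketch:

    # List kernel net devices that sit under a given PCI function (address taken from this log).
    pci=0000:4b:00.0
    for d in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$d" ] && echo "Found net device under $pci: ${d##*/}"
    done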
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:12.880 17:41:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:13.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:13.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms
00:25:13.140 
00:25:13.140 --- 10.0.0.2 ping statistics ---
00:25:13.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:13.140 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:13.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:13.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms
00:25:13.140 
00:25:13.140 --- 10.0.0.1 ping statistics ---
00:25:13.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:13.140 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:13.140 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1685336
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1685336
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1685336 ']'
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:13.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
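nvmf_tcp_init, traced above, splits the two E810 ports across a network namespace boundary: the target port cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule opens TCP/4420, and one ping in each direction proves the path before any NVMe traffic flows. Condensed into a standalone sketch (needs root; the interface names are the ones from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1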
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:13.399 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:13.399 [2024-12-06 17:41:05.320626] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:25:13.399 [2024-12-06 17:41:05.320706] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:13.399 [2024-12-06 17:41:05.414199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:13.399 [2024-12-06 17:41:05.448168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:13.399 [2024-12-06 17:41:05.448199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:13.399 [2024-12-06 17:41:05.448205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:13.399 [2024-12-06 17:41:05.448210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:13.399 [2024-12-06 17:41:05.448214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:13.399 [2024-12-06 17:41:05.449515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:13.399 [2024-12-06 17:41:05.449681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:13.399 [2024-12-06 17:41:05.449880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:13.399 [2024-12-06 17:41:05.449882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:25:14.338 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:14.338 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:25:14.338 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:14.338 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:14.338 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:14.338 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:14.338 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:14.338 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.338 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:14.338 [2024-12-06 17:41:06.165606] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
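waitforlisten, invoked above with pid 1685336, blocks until the freshly started nvmf_tgt answers on its RPC socket rather than sleeping a fixed time. The same idea expressed directly against the RPC socket, as a sketch (the retry bound and the rpc.py path are assumptions based on a stock SPDK tree):

    # Poll the target's RPC socket until it responds, up to roughly 10 seconds.
    for i in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk.sock -t 1 spdk_get_version >/dev/null 2>&1; then
            echo "nvmf_tgt is up"
            break
        fi
        sleep 0.1
    done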
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.339 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
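Each loop iteration above appends one subsystem's worth of RPC calls to rpcs.txt, which rpc_cmd then replays in a single batch; the Malloc1..Malloc10 lines that follow are the bdev-creation replies. Spelled out for a single subsystem with stock SPDK RPC verbs (the bdev size, block size, and serial number here are illustrative, not taken from the script):

    scripts/rpc.py bdev_malloc_create -b Malloc1 128 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420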
00:25:14.339 [2024-12-06 17:41:06.272462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.339 Malloc2 00:25:14.339 Malloc3 00:25:14.339 Malloc4 00:25:14.339 Malloc5 00:25:14.600 Malloc6 00:25:14.600 Malloc7 00:25:14.600 Malloc8 00:25:14.600 Malloc9 00:25:14.600 Malloc10 00:25:14.600 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.600 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:14.600 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.600 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:14.860 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1685404 00:25:14.860 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:25:14.860 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:25:14.860 [2024-12-06 17:41:06.751131] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1685336 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1685336 ']' 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1685336 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1685336 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1685336' 00:25:20.155 killing process with pid 1685336 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1685336 00:25:20.155 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1685336 00:25:20.155 Write completed with error (sct=0, 
00:25:20.155 Write completed with error (sct=0, sc=8)
00:25:20.155 starting I/O failed: -6
00:25:20.155 [the two records above repeat, interleaved, for every write outstanding on this controller; duplicates elided here and below]
00:25:20.155 [2024-12-06 17:41:11.749956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.155 [2024-12-06 17:41:11.750943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.156 [2024-12-06 17:41:11.751099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42070 is same with the state(6) to be set [repeated through 17:41:11.751146]
00:25:20.156 [2024-12-06 17:41:11.751332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42540 is same with the state(6) to be set [repeated through 17:41:11.751438]
00:25:20.156 [2024-12-06 17:41:11.751665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42a10 is same with the state(6) to be set [repeated through 17:41:11.751710]
00:25:20.156 [2024-12-06 17:41:11.751847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.156 [2024-12-06 17:41:11.751983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d41ba0 is same with the state(6) to be set [repeated through 17:41:11.752042]
00:25:20.156 [2024-12-06 17:41:11.752920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d416d0 is same with the state(6) to be set [repeated through 17:41:11.752944]
00:25:20.157 [2024-12-06 17:41:11.753132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40860 is same with the state(6) to be set [repeated through 17:41:11.753167]
00:25:20.157 [2024-12-06 17:41:11.753455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.157 NVMe io qpair process completion error
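The same shape repeats for the remaining controllers below; only the subsystem NQN and the qpair ids change. For a run like this it is faster to tally the failures than to read them. A small sketch, where the log filename is a placeholder:

  log=console.log   # placeholder: path to this autotest console output
  # Total writes aborted with sct=0/sc=8.
  grep -c 'Write completed with error (sct=0, sc=8)' "$log"
  # CQ transport errors grouped by subsystem and qpair id.
  grep -o '\[nqn[^]]*\] CQ transport error -6 ([^)]*) on qpair id [0-9]*' "$log" \
      | sort | uniq -c | sort -rn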
00:25:20.157 [aborted-write records elided]
00:25:20.157 [2024-12-06 17:41:11.754684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.157 [aborted-write records elided]
00:25:20.158 [2024-12-06 17:41:11.755493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.158 [aborted-write records elided]
00:25:20.158 [2024-12-06 17:41:11.756400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.158 [aborted-write records elided]
00:25:20.158 [2024-12-06 17:41:11.757803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.158 NVMe io qpair process completion error
00:25:20.159 [aborted-write records elided]
00:25:20.159 [2024-12-06 17:41:11.759030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.159 [aborted-write records elided]
00:25:20.159 [2024-12-06 17:41:11.759971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.159 [aborted-write records elided]
00:25:20.159 [2024-12-06 17:41:11.760881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.160 [aborted-write records elided]
00:25:20.160 [2024-12-06 17:41:11.762347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.160 NVMe io qpair process completion error
00:25:20.160 [aborted-write records elided]
completed with error (sct=0, sc=8) 00:25:20.160 starting I/O failed: -6 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 [2024-12-06 17:41:11.763789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 starting I/O failed: -6 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 starting I/O failed: -6 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 starting I/O failed: -6 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 starting I/O failed: -6 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 starting I/O failed: -6 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 starting I/O failed: -6 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 starting I/O failed: -6 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 starting I/O failed: -6 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 starting I/O failed: -6 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.160 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 Write completed with error (sct=0, sc=8) 00:25:20.161 starting I/O failed: -6 00:25:20.161 [2024-12-06 17:41:11.764603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.161 
00:25:20.161 Write completed with error (sct=0, sc=8)
00:25:20.161 starting I/O failed: -6
00:25:20.161 [... repeated write failures elided ...]
00:25:20.161 [2024-12-06 17:41:11.765523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.161 Write completed with error (sct=0, sc=8)
00:25:20.161 starting I/O failed: -6
00:25:20.162 [... repeated write failures elided ...]
00:25:20.162 [2024-12-06 17:41:11.768511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.162 NVMe io qpair process completion error
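The "CQ transport error -6 (No such device or address)" entries come from the completion-polling path rather than from any individual command: once the TCP connection behind a qpair drops, polling the qpair fails as a whole with -ENXIO (errno 6). A sketch of that polling side, assuming the public SPDK API (the wrapper function name is hypothetical):

```c
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative poller: spdk_nvme_qpair_process_completions() normally
 * returns the number of completions it drained; after a transport
 * failure it returns a negative errno such as -ENXIO (-6) instead. */
static void
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc == -ENXIO) {
		/* The target side of this qpair is gone; queued writes are
		 * completed with an abort status (the sct=0/sc=8 lines above)
		 * and the qpair must be reconnected or torn down. */
		fprintf(stderr, "qpair transport failure\n");
	}
}
```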
00:25:20.162 Write completed with error (sct=0, sc=8)
00:25:20.162 starting I/O failed: -6
00:25:20.162 [... repeated write failures elided ...]
00:25:20.162 [2024-12-06 17:41:11.769823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.162 Write completed with error (sct=0, sc=8)
00:25:20.162 starting I/O failed: -6
00:25:20.162 [... repeated write failures elided ...]
00:25:20.162 [2024-12-06 17:41:11.770694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.162 Write completed with error (sct=0, sc=8)
00:25:20.162 starting I/O failed: -6
00:25:20.163 [... repeated write failures elided ...]
00:25:20.163 [2024-12-06 17:41:11.771633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.163 Write completed with error (sct=0, sc=8)
00:25:20.163 starting I/O failed: -6
00:25:20.163 [... repeated write failures elided ...]
00:25:20.163 [2024-12-06 17:41:11.773298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.163 NVMe io qpair process completion error
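The interleaved "starting I/O failed: -6" lines are the submit side of the same failure: after the disconnect, new writes are rejected with -ENXIO before they ever reach the wire. A sketch under that assumption, reusing the callback from the earlier fragment (buffer management and LBA choice are placeholders):

```c
#include <stdio.h>
#include "spdk/nvme.h"

static void write_complete(void *ctx, const struct spdk_nvme_cpl *cpl);  /* earlier sketch */

/* Illustrative submit path: once the qpair has failed,
 * spdk_nvme_ns_cmd_write() returns -ENXIO (-6) immediately. */
static void
start_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	    void *buf, uint64_t lba)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba,
					1 /* lba_count */, write_complete,
					NULL /* cb_arg */, 0 /* io_flags */);
	if (rc != 0) {
		printf("starting I/O failed: %d\n", rc);
	}
}
```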
00:25:20.163 Write completed with error (sct=0, sc=8)
00:25:20.163 starting I/O failed: -6
00:25:20.163 [... repeated write failures elided ...]
00:25:20.163 [2024-12-06 17:41:11.774390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.163 Write completed with error (sct=0, sc=8)
00:25:20.164 starting I/O failed: -6
00:25:20.164 [... repeated write failures elided ...]
00:25:20.164 [2024-12-06 17:41:11.775352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.164 Write completed with error (sct=0, sc=8)
00:25:20.164 starting I/O failed: -6
00:25:20.164 [... repeated write failures elided ...]
00:25:20.164 [2024-12-06 17:41:11.776264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.164 Write completed with error (sct=0, sc=8)
00:25:20.164 starting I/O failed: -6
00:25:20.165 [... repeated write failures elided ...]
00:25:20.165 [2024-12-06 17:41:11.779114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.165 NVMe io qpair process completion error
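Each controller's qpairs fail within the same few milliseconds of wall-clock time (17:41:11.76x), which is what one would expect when the target side is torn down while initiators are still writing; this run is exercising the failure path, not recovering from it. If recovery were the goal, one hedged option with the public SPDK API is to free the dead qpair and allocate a replacement once the target is reachable again (a sketch; the function name is hypothetical):

```c
#include "spdk/nvme.h"

/* Illustrative recovery step: release the failed qpair and request a
 * new one with default options. Only useful once the target is back. */
static struct spdk_nvme_qpair *
replace_failed_qpair(struct spdk_nvme_ctrlr *ctrlr,
		     struct spdk_nvme_qpair *failed)
{
	spdk_nvme_ctrlr_free_io_qpair(failed);
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL /* default opts */, 0);
}
```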
00:25:20.165 Write completed with error (sct=0, sc=8)
00:25:20.165 starting I/O failed: -6
00:25:20.165 [... repeated write failures elided ...]
00:25:20.165 [2024-12-06 17:41:11.780224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.165 Write completed with error (sct=0, sc=8)
00:25:20.165 starting I/O failed: -6
00:25:20.165 [... repeated write failures elided ...]
00:25:20.165 [2024-12-06 17:41:11.781118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.165 Write completed with error (sct=0, sc=8)
00:25:20.166 starting I/O failed: -6
00:25:20.166 [... repeated write failures elided ...]
00:25:20.166 [2024-12-06 17:41:11.782037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.166 Write completed with error (sct=0, sc=8)
00:25:20.166 starting I/O failed: -6
00:25:20.166 [... repeated write failures elided ...]
00:25:20.166 [2024-12-06 17:41:11.783618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.166 NVMe io qpair process completion error
00:25:20.166 Write completed with error (sct=0, sc=8)
00:25:20.166 starting I/O failed: -6
00:25:20.166 [... repeated write failures elided ...]
00:25:20.166 [2024-12-06 17:41:11.784807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
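Taken together, the log shows several subsystems (cnode1, cnode2, cnode4, cnode7, cnode8, cnode9), each with I/O qpairs numbered 1 through 4, failing back to back; a harness polling them all from one loop would report exactly this burst. A rough sketch of that shape, where the counts and structure are assumptions drawn only from what the log shows:

```c
#include <stddef.h>
#include "spdk/nvme.h"

static void poll_qpair(struct spdk_nvme_qpair *qpair);  /* earlier sketch */

#define MAX_CTRLRS       9  /* the log names subsystems up to cnode9 */
#define QPAIRS_PER_CTRLR 4  /* qpair ids 1..4 appear for each cnode  */

struct harness {
	struct spdk_nvme_qpair *qpairs[MAX_CTRLRS][QPAIRS_PER_CTRLR];
};

/* Poll every live qpair once; when the target dies, every call starts
 * reporting the transport error within the same few milliseconds. */
static void
poll_all(struct harness *h)
{
	for (size_t c = 0; c < MAX_CTRLRS; c++) {
		for (size_t q = 0; q < QPAIRS_PER_CTRLR; q++) {
			if (h->qpairs[c][q] != NULL) {
				poll_qpair(h->qpairs[c][q]);
			}
		}
	}
}
```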
00:25:20.166 starting I/O failed: -6 00:25:20.166 Write completed with error (sct=0, sc=8) 00:25:20.166 Write completed with error (sct=0, sc=8) 00:25:20.166 Write completed with error (sct=0, sc=8) 00:25:20.166 starting I/O failed: -6 00:25:20.166 Write completed with error (sct=0, sc=8) 00:25:20.166 starting I/O failed: -6 00:25:20.166 Write completed with error (sct=0, sc=8) 00:25:20.166 Write completed with error (sct=0, sc=8) 00:25:20.166 Write completed with error (sct=0, sc=8) 00:25:20.166 starting I/O failed: -6 00:25:20.166 Write completed with error (sct=0, sc=8) 00:25:20.166 starting I/O failed: -6 00:25:20.166 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 [2024-12-06 17:41:11.785627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, 
sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 [2024-12-06 17:41:11.786582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error 
(sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error 
(sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.167 Write completed with error (sct=0, sc=8) 00:25:20.167 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 [2024-12-06 17:41:11.788578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.168 NVMe io qpair process completion error 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 
00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 [2024-12-06 17:41:11.789876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 
starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 [2024-12-06 17:41:11.790726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write 
completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 starting I/O failed: -6 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.168 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 [2024-12-06 17:41:11.791660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 
00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 
00:25:20.169 starting I/O failed: -6 00:25:20.169 [2024-12-06 17:41:11.793814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.169 NVMe io qpair process completion error 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 [2024-12-06 17:41:11.795120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.169 Write completed with error (sct=0, sc=8) 00:25:20.169 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 
00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 [2024-12-06 17:41:11.796054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: 
-6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 [2024-12-06 17:41:11.796979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O 
failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.170 Write completed with error (sct=0, sc=8) 00:25:20.170 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O 
failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 Write completed with error (sct=0, sc=8) 00:25:20.171 starting I/O failed: -6 00:25:20.171 [2024-12-06 17:41:11.799160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:20.171 NVMe io qpair process completion error 00:25:20.171 Initializing NVMe Controllers 00:25:20.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:25:20.171 Controller IO queue size 128, less than required. 00:25:20.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:20.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:20.171 Controller IO queue size 128, less than required. 00:25:20.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:20.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:25:20.171 Controller IO queue size 128, less than required. 00:25:20.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:20.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:25:20.171 Controller IO queue size 128, less than required. 00:25:20.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:20.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:25:20.171 Controller IO queue size 128, less than required. 00:25:20.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:25:20.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:25:20.171 Controller IO queue size 128, less than required.
00:25:20.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:20.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:25:20.171 Controller IO queue size 128, less than required.
00:25:20.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:20.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:25:20.171 Controller IO queue size 128, less than required.
00:25:20.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:20.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:25:20.171 Controller IO queue size 128, less than required.
00:25:20.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:20.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:25:20.171 Controller IO queue size 128, less than required.
00:25:20.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:20.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:25:20.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:20.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:25:20.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:25:20.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:25:20.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:25:20.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:25:20.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:25:20.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:25:20.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:25:20.171 Initialization complete. Launching workers.
00:25:20.171 ========================================================
00:25:20.171                                                                              Latency(us)
00:25:20.171 Device Information                                                       :      IOPS     MiB/s   Average       min       max
00:25:20.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   1866.77     80.21  68585.30    868.54 124566.93
00:25:20.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1915.80     82.32  66849.66    684.25 125681.08
00:25:20.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   1906.73     81.93  67189.98    816.14 123383.26
00:25:20.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   1924.44     82.69  66612.10    650.61 121574.93
00:25:20.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   1919.04     82.46  66821.63    678.94 123577.39
00:25:20.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   1902.41     81.74  67444.22    908.69 123393.97
00:25:20.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   1893.98     81.38  67766.27    676.85 117851.34
00:25:20.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   1887.72     81.11  68021.37    865.66 129949.14
00:25:20.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   1884.70     80.98  68161.42    924.50 123252.63
00:25:20.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1893.77     81.37  67132.55    722.09 125928.59
00:25:20.171 ========================================================
00:25:20.171 Total                                                                    :  18995.35    816.21  67453.23    650.61 129949.14
00:25:20.171
00:25:20.171 [2024-12-06 17:41:11.804225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9560 is same with the state(6) to be set
00:25:20.171 [2024-12-06 17:41:11.804273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfb720 is same with the state(6) to be set
00:25:20.171 [2024-12-06 17:41:11.804304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa740 is same with the state(6) to be set
00:25:20.171 [2024-12-06 17:41:11.804333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbae0 is same with the state(6) to be set
00:25:20.171 [2024-12-06 17:41:11.804372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfaa70 is same with the state(6) to be set
00:25:20.171 [2024-12-06 17:41:11.804405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9890 is same with the state(6) to be set
00:25:20.171 [2024-12-06 17:41:11.804434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa410 is same with the state(6) to be set
00:25:20.171 [2024-12-06 17:41:11.804462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9ef0 is same with the state(6) to be set
00:25:20.171 [2024-12-06 17:41:11.804490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9bc0 is same with the state(6) to be set
00:25:20.171 [2024-12-06 17:41:11.804518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfb900 is same with the state(6) to be set
00:25:20.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:25:20.171 17:41:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
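Each attach above warns that the controller's IO queue holds only 128 entries, so anything submitted beyond that sits queued in the host-side NVMe driver. A minimal sketch of a perf invocation that heeds the warning, assuming spdk_nvme_perf's usual flags (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -r transport ID); the address and subsystem NQN are copied from the log, while the depth, size, and time values are illustrative only:

    # keep outstanding I/O at or below the reported 128-entry controller queue
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'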
00:25:21.112 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1685404
00:25:21.112 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:25:21.112 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1685404
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1685404
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
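The wait on the perf process is wrapped in the harness's NOT helper, traced line by line above: initialize es=0, check the argument is executable (valid_exec_arg), run it, capture its exit status, and succeed only if the command failed. A reduced sketch of that pattern, with the argument-validation and error-string branches trimmed (so this is not the exact autotest_common.sh implementation):

    # NOT <cmd...>: exit 0 iff <cmd> fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # killed by a signal: treat as a hard failure
        (( es != 0 ))                # invert: failure of <cmd> is success for NOT
    }

Here "wait 1685404" returns non-zero because the perf run (pid 1685404) had already exited with errors, so NOT succeeds and the test proceeds to stoptarget.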
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:21.113 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:21.113 rmmod nvme_tcp
00:25:21.113 rmmod nvme_fabrics
00:25:21.113 rmmod nvme_keyring
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1685336 ']'
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1685336
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1685336 ']'
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1685336
00:25:21.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1685336) - No such process
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1685336 is not found'
00:25:21.113 Process with pid 1685336 is not found
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:21.113 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:23.654
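Everything from stoptarget through the address flush above is the harness's nvmftestfini path: sync, unload the kernel NVMe-TCP modules, kill the target app if it is still alive (here it is already gone, hence "No such process"), restore iptables minus SPDK's rules, remove any SPDK-created network namespace, and flush the test interface. A condensed sketch of the same sequence; the function name and parameters below are illustrative, not the framework's own:

    # illustrative teardown for an NVMe-oF/TCP test bed
    nvmf_tcp_teardown() {
        local tgt_pid=$1 iface=$2                             # e.g. 1685336 and cvl_0_1 in this run
        sync
        modprobe -v -r nvme-tcp                               # in the log this one call already dropped nvme_tcp, nvme_fabrics and nvme_keyring
        modprobe -v -r nvme-fabrics
        kill -0 "$tgt_pid" 2>/dev/null && kill "$tgt_pid"     # target may already have exited
        iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip SPDK's iptables rules, keep the rest
        ip -4 addr flush "$iface"                             # clear the test NIC's addresses
    }

The namespace removal step (_remove_spdk_ns in the trace) is deliberately omitted here, since its implementation lives elsewhere in nvmf/common.sh.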
common/autotest_common.sh@10 -- # set +x 00:25:23.654 ************************************ 00:25:23.654 END TEST nvmf_shutdown 00:25:23.654 ************************************ 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:23.654 ************************************ 00:25:23.654 START TEST nvmf_nsid 00:25:23.654 ************************************ 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:23.654 * Looking for test storage... 00:25:23.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:23.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.654 --rc genhtml_branch_coverage=1 00:25:23.654 --rc genhtml_function_coverage=1 00:25:23.654 --rc genhtml_legend=1 00:25:23.654 --rc geninfo_all_blocks=1 00:25:23.654 --rc geninfo_unexecuted_blocks=1 00:25:23.654 00:25:23.654 ' 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:23.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.654 --rc genhtml_branch_coverage=1 00:25:23.654 --rc genhtml_function_coverage=1 00:25:23.654 --rc genhtml_legend=1 00:25:23.654 --rc geninfo_all_blocks=1 00:25:23.654 --rc geninfo_unexecuted_blocks=1 00:25:23.654 00:25:23.654 ' 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:23.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.654 --rc genhtml_branch_coverage=1 00:25:23.654 --rc genhtml_function_coverage=1 00:25:23.654 --rc genhtml_legend=1 00:25:23.654 --rc geninfo_all_blocks=1 00:25:23.654 --rc geninfo_unexecuted_blocks=1 00:25:23.654 00:25:23.654 ' 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:23.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.654 --rc genhtml_branch_coverage=1 00:25:23.654 --rc genhtml_function_coverage=1 00:25:23.654 --rc genhtml_legend=1 00:25:23.654 --rc geninfo_all_blocks=1 00:25:23.654 --rc geninfo_unexecuted_blocks=1 00:25:23.654 00:25:23.654 ' 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.654 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.655 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:31.793 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:31.793 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
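For reference: the discovery pass above buckets PCI functions by vendor:device ID (0x8086:0x159b is the Intel E810 "ice" part) and then resolves each function to its kernel net device through sysfs, which is why the two "Found net devices under 0000:4b:00.x" lines follow below. A minimal standalone sketch of the same lookup, assuming lspci is installed and the /sys/bus/pci/devices/<pci>/net/* layout shown in the trace; this is an illustrative reconstruction, not the harness's own code:

# Hypothetical sketch: enumerate Intel E810 (8086:159b) PCI functions
# and print the kernel net device behind each one.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $netdir ]] && echo "Found net devices under $pci: ${netdir##*/}"
    done
done

Each port of the dual-port NIC is its own PCI function with one net device, so on this rig the loop would report cvl_0_0 and cvl_0_1 separately, matching the trace.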
00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.793 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:31.794 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:31.794 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.794 17:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:31.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:25:31.794 00:25:31.794 --- 10.0.0.2 ping statistics --- 00:25:31.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.794 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:31.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:25:31.794 00:25:31.794 --- 10.0.0.1 ping statistics --- 00:25:31.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.794 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1687931 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1687931 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1687931 ']' 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.794 17:41:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:31.794 [2024-12-06 17:41:22.967825] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:25:31.794 [2024-12-06 17:41:22.967893] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.794 [2024-12-06 17:41:23.063409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.794 [2024-12-06 17:41:23.114292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.794 [2024-12-06 17:41:23.114343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.794 [2024-12-06 17:41:23.114351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.794 [2024-12-06 17:41:23.114359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.794 [2024-12-06 17:41:23.114365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.794 [2024-12-06 17:41:23.115167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1687962 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:25:31.794 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:25:31.795 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=288609cb-000b-437f-8835-b5e2e2b75d18 00:25:31.795 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:25:31.795 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4945a96e-a05a-47f3-8e70-22bf7cb0ae54 00:25:31.795 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=44734a32-b398-4b75-87bf-8229abdc4b58 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:32.056 null0 00:25:32.056 null1 00:25:32.056 null2 00:25:32.056 [2024-12-06 17:41:23.890712] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.056 [2024-12-06 17:41:23.890979] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:25:32.056 [2024-12-06 17:41:23.891044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687962 ] 00:25:32.056 [2024-12-06 17:41:23.914949] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1687962 /var/tmp/tgt2.sock 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1687962 ']' 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:25:32.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
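For reference: further down, nsid.sh verifies that each namespace's reported NGUID is simply the dash-stripped form of the UUID generated above; the trace shows the moving parts (tr -d -, nvme id-ns -o json, jq -r .nguid, then a literal string compare). A condensed sketch of that check, assuming an already-connected controller exposing the namespace as /dev/nvme0n1; the uuid2nguid normalization is inferred from the trace, not quoted from the script:

# Hypothetical sketch of the UUID -> NGUID verification seen later in
# this test; the ns1uuid value is the one generated in this run.
uuid=288609cb-000b-437f-8835-b5e2e2b75d18
expected=$(tr -d '-' <<< "$uuid")
nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
if [[ ${nguid^^} == "${expected^^}" ]]; then
    echo "namespace reports the NGUID derived from its UUID"
fi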
00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.056 17:41:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:32.056 [2024-12-06 17:41:23.986023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.056 [2024-12-06 17:41:24.038286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.316 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.316 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:32.316 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:25:32.576 [2024-12-06 17:41:24.600002] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.576 [2024-12-06 17:41:24.616183] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:25:32.837 nvme0n1 nvme0n2 00:25:32.837 nvme1n1 00:25:32.837 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:25:32.837 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:25:32.837 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:25:34.221 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:25:35.161 17:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 288609cb-000b-437f-8835-b5e2e2b75d18 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=288609cb000b437f8835b5e2e2b75d18 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 288609CB000B437F8835B5E2E2B75D18 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 288609CB000B437F8835B5E2E2B75D18 == \2\8\8\6\0\9\C\B\0\0\0\B\4\3\7\F\8\8\3\5\B\5\E\2\E\2\B\7\5\D\1\8 ]] 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:35.161 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4945a96e-a05a-47f3-8e70-22bf7cb0ae54 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4945a96ea05a47f38e7022bf7cb0ae54 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4945A96EA05A47F38E7022BF7CB0AE54 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4945A96EA05A47F38E7022BF7CB0AE54 == \4\9\4\5\A\9\6\E\A\0\5\A\4\7\F\3\8\E\7\0\2\2\B\F\7\C\B\0\A\E\5\4 ]] 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:25:35.421 17:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 44734a32-b398-4b75-87bf-8229abdc4b58 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=44734a32b3984b7587bf8229abdc4b58 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 44734A32B3984B7587BF8229ABDC4B58 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 44734A32B3984B7587BF8229ABDC4B58 == \4\4\7\3\4\A\3\2\B\3\9\8\4\B\7\5\8\7\B\F\8\2\2\9\A\B\D\C\4\B\5\8 ]] 00:25:35.421 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1687962 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1687962 ']' 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1687962 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1687962 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1687962' 00:25:35.682 killing process with pid 1687962 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1687962 00:25:35.682 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1687962 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:35.943 rmmod nvme_tcp 00:25:35.943 rmmod nvme_fabrics 00:25:35.943 rmmod nvme_keyring 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1687931 ']' 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1687931 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1687931 ']' 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1687931 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1687931 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1687931' 00:25:35.943 killing process with pid 1687931 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1687931 00:25:35.943 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1687931 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.204 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.117 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:38.117 00:25:38.117 real 0m14.840s 00:25:38.117 user 
0m11.374s 00:25:38.117 sys 0m6.838s 00:25:38.117 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.117 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:38.117 ************************************ 00:25:38.117 END TEST nvmf_nsid 00:25:38.117 ************************************ 00:25:38.117 17:41:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:25:38.117 00:25:38.117 real 13m2.606s 00:25:38.117 user 27m18.672s 00:25:38.117 sys 3m53.367s 00:25:38.117 17:41:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.117 17:41:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:38.117 ************************************ 00:25:38.117 END TEST nvmf_target_extra 00:25:38.117 ************************************ 00:25:38.378 17:41:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:38.378 17:41:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:38.378 17:41:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.378 17:41:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:38.378 ************************************ 00:25:38.378 START TEST nvmf_host 00:25:38.378 ************************************ 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:38.378 * Looking for test storage... 00:25:38.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:25:38.378 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:38.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.641 --rc genhtml_branch_coverage=1 00:25:38.641 --rc genhtml_function_coverage=1 00:25:38.641 --rc genhtml_legend=1 00:25:38.641 --rc geninfo_all_blocks=1 00:25:38.641 --rc geninfo_unexecuted_blocks=1 00:25:38.641 00:25:38.641 ' 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:38.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.641 --rc genhtml_branch_coverage=1 00:25:38.641 --rc genhtml_function_coverage=1 00:25:38.641 --rc genhtml_legend=1 00:25:38.641 --rc geninfo_all_blocks=1 00:25:38.641 --rc geninfo_unexecuted_blocks=1 00:25:38.641 00:25:38.641 ' 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:38.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.641 --rc genhtml_branch_coverage=1 00:25:38.641 --rc genhtml_function_coverage=1 00:25:38.641 --rc genhtml_legend=1 00:25:38.641 --rc geninfo_all_blocks=1 00:25:38.641 --rc geninfo_unexecuted_blocks=1 00:25:38.641 00:25:38.641 ' 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:38.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.641 --rc genhtml_branch_coverage=1 00:25:38.641 --rc genhtml_function_coverage=1 00:25:38.641 --rc genhtml_legend=1 00:25:38.641 --rc geninfo_all_blocks=1 00:25:38.641 --rc geninfo_unexecuted_blocks=1 00:25:38.641 00:25:38.641 ' 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:38.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.641 ************************************ 00:25:38.641 START TEST nvmf_multicontroller 00:25:38.641 ************************************ 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:38.641 * Looking for test storage... 
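[Annotation] The `[: : integer expression expected` diagnostic captured above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: `-eq` requires integer operands, the variable expanded to an empty string, so `[` prints the error and returns nonzero, and the surrounding test simply falls through to the next branch (the run continues normally at @37). A guarded comparison avoids the noise; `flag` below is a hypothetical stand-in for whichever harness variable was empty:

    # Hypothetical variable name; empty in the captured run, like the real one.
    flag=""
    if [ "${flag:-0}" -eq 1 ]; then   # ${flag:-0} substitutes 0 for empty/unset values
        echo "feature enabled"
    fi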
00:25:38.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:25:38.641 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:38.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.903 --rc genhtml_branch_coverage=1 00:25:38.903 --rc genhtml_function_coverage=1 00:25:38.903 --rc genhtml_legend=1 00:25:38.903 --rc geninfo_all_blocks=1 00:25:38.903 --rc geninfo_unexecuted_blocks=1 00:25:38.903 00:25:38.903 ' 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:38.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.903 --rc genhtml_branch_coverage=1 00:25:38.903 --rc genhtml_function_coverage=1 00:25:38.903 --rc genhtml_legend=1 00:25:38.903 --rc geninfo_all_blocks=1 00:25:38.903 --rc geninfo_unexecuted_blocks=1 00:25:38.903 00:25:38.903 ' 00:25:38.903 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:38.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.903 --rc genhtml_branch_coverage=1 00:25:38.903 --rc genhtml_function_coverage=1 00:25:38.903 --rc genhtml_legend=1 00:25:38.903 --rc geninfo_all_blocks=1 00:25:38.904 --rc geninfo_unexecuted_blocks=1 00:25:38.904 00:25:38.904 ' 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:38.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.904 --rc genhtml_branch_coverage=1 00:25:38.904 --rc genhtml_function_coverage=1 00:25:38.904 --rc genhtml_legend=1 00:25:38.904 --rc geninfo_all_blocks=1 00:25:38.904 --rc geninfo_unexecuted_blocks=1 00:25:38.904 00:25:38.904 ' 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:38.904 17:41:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:38.904 17:41:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:25:38.904 17:41:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:47.048 
17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:47.048 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:47.048 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.048 17:41:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.048 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:47.049 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:47.049 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
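[Annotation] NIC discovery above is purely sysfs-driven: the harness matches supported PCI IDs against the bus (here two Intel E810 functions, 0x8086:0x159b at 0000:4b:00.0 and .1, bound to `ice`), then expands `/sys/bus/pci/devices/$pci/net/*` to recover each function's kernel netdev name, yielding `cvl_0_0` and `cvl_0_1`. The lookup, condensed (PCI address taken from this run):

    # Map a PCI function to its kernel net device via sysfs, as the trace does.
    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"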
00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:47.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:25:47.049 00:25:47.049 --- 10.0.0.2 ping statistics --- 00:25:47.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.049 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:25:47.049 17:41:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:47.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:25:47.049 00:25:47.049 --- 10.0.0.1 ping statistics --- 00:25:47.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.049 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1690553 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1690553 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1690553 ']' 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.049 [2024-12-06 17:41:38.118258] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
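[Annotation] `nvmf_tcp_init` above builds the test topology on the two physical ports rather than loopback: the target NIC `cvl_0_0` is moved into a fresh `cvl_0_0_ns_spdk` namespace and given 10.0.0.2/24, its sibling `cvl_0_1` stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP 4420, both directions are ping-verified, and `nvmf_tgt` is then started inside the namespace. The same sequence, condensed from the trace (run as root; paths abbreviated):

    # Condensed replay of the nvmf_tcp_init steps captured above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &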
00:25:47.049 [2024-12-06 17:41:38.118324] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.049 [2024-12-06 17:41:38.216416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:47.049 [2024-12-06 17:41:38.267866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.049 [2024-12-06 17:41:38.267920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.049 [2024-12-06 17:41:38.267929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.049 [2024-12-06 17:41:38.267936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.049 [2024-12-06 17:41:38.267943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.049 [2024-12-06 17:41:38.269713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.049 [2024-12-06 17:41:38.269880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.049 [2024-12-06 17:41:38.269880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.049 17:41:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.049 [2024-12-06 17:41:38.998350] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.049 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.049 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:47.049 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.049 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.049 Malloc0 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.050 [2024-12-06 17:41:39.070923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.050 [2024-12-06 17:41:39.082813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.050 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.050 Malloc1 00:25:47.311 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1690590 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1690590 /var/tmp/bdevperf.sock 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1690590 ']' 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:47.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
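[Annotation] In RPC terms, the setup captured above gives the in-namespace target a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, and two subsystems (`cnode1`, `cnode2`) each listening on 10.0.0.2 ports 4420 and 4421; bdevperf is then launched idle (`-z`) on its own RPC socket with a 128-deep, 4 KiB, 1-second write workload armed. A sketch of the equivalent manual sequence via scripts/rpc.py (paths abbreviated; flags as in the trace; `cnode2` setup is analogous):

    # Target side (inside cvl_0_0_ns_spdk), mirroring the rpc_cmd calls above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Host side: bdevperf idles (-z) on its own socket until bdevs are attached over RPC.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &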
00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.312 17:41:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.256 NVMe0n1 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.256 1 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.256 request: 00:25:48.256 { 00:25:48.256 "name": "NVMe0", 00:25:48.256 "trtype": "tcp", 00:25:48.256 "traddr": "10.0.0.2", 00:25:48.256 "adrfam": "ipv4", 00:25:48.256 "trsvcid": "4420", 00:25:48.256 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:25:48.256 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:48.256 "hostaddr": "10.0.0.1", 00:25:48.256 "prchk_reftag": false, 00:25:48.256 "prchk_guard": false, 00:25:48.256 "hdgst": false, 00:25:48.256 "ddgst": false, 00:25:48.256 "allow_unrecognized_csi": false, 00:25:48.256 "method": "bdev_nvme_attach_controller", 00:25:48.256 "req_id": 1 00:25:48.256 } 00:25:48.256 Got JSON-RPC error response 00:25:48.256 response: 00:25:48.256 { 00:25:48.256 "code": -114, 00:25:48.256 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:48.256 } 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.256 request: 00:25:48.256 { 00:25:48.256 "name": "NVMe0", 00:25:48.256 "trtype": "tcp", 00:25:48.256 "traddr": "10.0.0.2", 00:25:48.256 "adrfam": "ipv4", 00:25:48.256 "trsvcid": "4420", 00:25:48.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:48.256 "hostaddr": "10.0.0.1", 00:25:48.256 "prchk_reftag": false, 00:25:48.256 "prchk_guard": false, 00:25:48.256 "hdgst": false, 00:25:48.256 "ddgst": false, 00:25:48.256 "allow_unrecognized_csi": false, 00:25:48.256 "method": "bdev_nvme_attach_controller", 00:25:48.256 "req_id": 1 00:25:48.256 } 00:25:48.256 Got JSON-RPC error response 00:25:48.256 response: 00:25:48.256 { 00:25:48.256 "code": -114, 00:25:48.256 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:48.256 } 00:25:48.256 17:41:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:48.256 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.257 request: 00:25:48.257 { 00:25:48.257 "name": "NVMe0", 00:25:48.257 "trtype": "tcp", 00:25:48.257 "traddr": "10.0.0.2", 00:25:48.257 "adrfam": "ipv4", 00:25:48.257 "trsvcid": "4420", 00:25:48.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.257 "hostaddr": "10.0.0.1", 00:25:48.257 "prchk_reftag": false, 00:25:48.257 "prchk_guard": false, 00:25:48.257 "hdgst": false, 00:25:48.257 "ddgst": false, 00:25:48.257 "multipath": "disable", 00:25:48.257 "allow_unrecognized_csi": false, 00:25:48.257 "method": "bdev_nvme_attach_controller", 00:25:48.257 "req_id": 1 00:25:48.257 } 00:25:48.257 Got JSON-RPC error response 00:25:48.257 response: 00:25:48.257 { 00:25:48.257 "code": -114, 00:25:48.257 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:48.257 } 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:48.257 17:41:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:48.257 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.518 request: 00:25:48.518 { 00:25:48.518 "name": "NVMe0", 00:25:48.518 "trtype": "tcp", 00:25:48.518 "traddr": "10.0.0.2", 00:25:48.518 "adrfam": "ipv4", 00:25:48.518 "trsvcid": "4420", 00:25:48.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.518 "hostaddr": "10.0.0.1", 00:25:48.518 "prchk_reftag": false, 00:25:48.518 "prchk_guard": false, 00:25:48.518 "hdgst": false, 00:25:48.518 "ddgst": false, 00:25:48.518 "multipath": "failover", 00:25:48.518 "allow_unrecognized_csi": false, 00:25:48.518 "method": "bdev_nvme_attach_controller", 00:25:48.518 "req_id": 1 00:25:48.518 } 00:25:48.518 Got JSON-RPC error response 00:25:48.518 response: 00:25:48.518 { 00:25:48.518 "code": -114, 00:25:48.518 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:48.518 } 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.518 NVMe0n1 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
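[Annotation] The four rejected attach attempts above, each answered with JSON-RPC error -114, map out the per-name multipath rules: once `NVMe0` is attached to `cnode1` from 10.0.0.1, a second attach is refused when the hostnqn differs, when it targets a different subsystem (`cnode2`), when multipath is forced off (`-x disable`), and when `-x failover` is requested over the already-used 4420 path; only the final call, reaching the same subsystem through the second listener on 4421, is accepted and yields the multipath `NVMe0n1` bdev. The accepted pair, as plain RPC calls (socket and flags from the trace):

    # First path, then a second path to the same subsystem -> multipath NVMe0n1.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1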
00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.518 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:48.518 17:41:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:49.902 { 00:25:49.902 "results": [ 00:25:49.902 { 00:25:49.902 "job": "NVMe0n1", 00:25:49.902 "core_mask": "0x1", 00:25:49.902 "workload": "write", 00:25:49.902 "status": "finished", 00:25:49.902 "queue_depth": 128, 00:25:49.902 "io_size": 4096, 00:25:49.902 "runtime": 1.004271, 00:25:49.902 "iops": 28954.3360308124, 00:25:49.902 "mibps": 113.10287512036093, 00:25:49.902 "io_failed": 0, 00:25:49.902 "io_timeout": 0, 00:25:49.902 "avg_latency_us": 4410.905413944104, 00:25:49.902 "min_latency_us": 2116.266666666667, 00:25:49.902 "max_latency_us": 11304.96 00:25:49.902 } 00:25:49.902 ], 00:25:49.902 "core_count": 1 00:25:49.902 } 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1690590 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1690590 ']' 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1690590 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1690590 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1690590' 00:25:49.903 killing process with pid 1690590 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1690590 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1690590 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:25:49.903 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:49.903 [2024-12-06 17:41:39.209926] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:25:49.903 [2024-12-06 17:41:39.210005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690590 ] 00:25:49.903 [2024-12-06 17:41:39.303132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.903 [2024-12-06 17:41:39.355886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.903 [2024-12-06 17:41:40.537511] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 2f7bca6c-bcfa-444d-8bc2-418a6b3d1074 already exists 00:25:49.903 [2024-12-06 17:41:40.537543] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:2f7bca6c-bcfa-444d-8bc2-418a6b3d1074 alias for bdev NVMe1n1 00:25:49.903 [2024-12-06 17:41:40.537552] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:49.903 Running I/O for 1 seconds... 00:25:49.903 28937.00 IOPS, 113.04 MiB/s 00:25:49.903 Latency(us) 00:25:49.903 [2024-12-06T16:41:41.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.903 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:49.903 NVMe0n1 : 1.00 28954.34 113.10 0.00 0.00 4410.91 2116.27 11304.96 00:25:49.903 [2024-12-06T16:41:41.969Z] =================================================================================================================== 00:25:49.903 [2024-12-06T16:41:41.969Z] Total : 28954.34 113.10 0.00 0.00 4410.91 2116.27 11304.96 00:25:49.903 Received shutdown signal, test time was about 1.000000 seconds 00:25:49.903 00:25:49.903 Latency(us) 00:25:49.903 [2024-12-06T16:41:41.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.903 [2024-12-06T16:41:41.969Z] =================================================================================================================== 00:25:49.903 [2024-12-06T16:41:41.969Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.903 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:49.903 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:49.903 rmmod nvme_tcp 00:25:49.903 rmmod nvme_fabrics 00:25:50.164 rmmod nvme_keyring 00:25:50.164 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:50.164 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:25:50.164 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
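The bdevperf summary above is internally consistent: with 4 KiB writes at queue depth 128, MiB/s follows directly from IOPS, so the reported figures can be checked by hand:
  # 28954.34 IOPS x 4096 bytes per I/O = 118,596,977 bytes/s
  # 118,596,977 / 1,048,576 = 113.10 MiB/s, matching the reported "mibps" field
The *ERROR* lines captured in try.txt (duplicate bdev name and uuid alias for NVMe1n1) appear to be the expected side effect of attaching a second controller, NVMe1, to the same namespace that NVMe0n1 already registered; the attach itself succeeds and bdev_nvme_get_controllers counts 2 controllers, so the test proceeds past them.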
00:25:50.164 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1690553 ']' 00:25:50.164 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1690553 00:25:50.164 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1690553 ']' 00:25:50.164 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1690553 00:25:50.164 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:50.164 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:50.164 17:41:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1690553 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1690553' 00:25:50.164 killing process with pid 1690553 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1690553 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1690553 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.164 17:41:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:52.747 00:25:52.747 real 0m13.745s 00:25:52.747 user 0m16.900s 00:25:52.747 sys 0m6.420s 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:52.747 ************************************ 00:25:52.747 END TEST nvmf_multicontroller 00:25:52.747 ************************************ 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.747 ************************************ 00:25:52.747 START TEST nvmf_aer 00:25:52.747 ************************************ 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:52.747 * Looking for test storage... 00:25:52.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:52.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.747 --rc genhtml_branch_coverage=1 00:25:52.747 --rc genhtml_function_coverage=1 00:25:52.747 --rc genhtml_legend=1 00:25:52.747 --rc geninfo_all_blocks=1 00:25:52.747 --rc geninfo_unexecuted_blocks=1 00:25:52.747 00:25:52.747 ' 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:52.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.747 --rc genhtml_branch_coverage=1 00:25:52.747 --rc genhtml_function_coverage=1 00:25:52.747 --rc genhtml_legend=1 00:25:52.747 --rc geninfo_all_blocks=1 00:25:52.747 --rc geninfo_unexecuted_blocks=1 00:25:52.747 00:25:52.747 ' 00:25:52.747 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:52.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.747 --rc genhtml_branch_coverage=1 00:25:52.747 --rc genhtml_function_coverage=1 00:25:52.747 --rc genhtml_legend=1 00:25:52.747 --rc geninfo_all_blocks=1 00:25:52.747 --rc geninfo_unexecuted_blocks=1 00:25:52.747 00:25:52.747 ' 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:52.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.748 --rc genhtml_branch_coverage=1 00:25:52.748 --rc genhtml_function_coverage=1 00:25:52.748 --rc genhtml_legend=1 00:25:52.748 --rc geninfo_all_blocks=1 00:25:52.748 --rc geninfo_unexecuted_blocks=1 00:25:52.748 00:25:52.748 ' 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:52.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:52.748 17:41:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:00.886 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:00.886 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:00.886 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:00.887 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:00.887 17:41:51 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:00.887 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:00.887 
17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:00.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:26:00.887 00:26:00.887 --- 10.0.0.2 ping statistics --- 00:26:00.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.887 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:00.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:26:00.887 00:26:00.887 --- 10.0.0.1 ping statistics --- 00:26:00.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.887 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:00.887 17:41:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1693082 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1693082 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1693082 ']' 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:00.887 [2024-12-06 17:41:52.099710] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:26:00.887 [2024-12-06 17:41:52.099770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.887 [2024-12-06 17:41:52.202725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:00.887 [2024-12-06 17:41:52.255679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.887 [2024-12-06 17:41:52.255737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.887 [2024-12-06 17:41:52.255746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.887 [2024-12-06 17:41:52.255753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.887 [2024-12-06 17:41:52.255759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.887 [2024-12-06 17:41:52.257711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.887 [2024-12-06 17:41:52.257873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.887 [2024-12-06 17:41:52.258035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.887 [2024-12-06 17:41:52.258035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:00.887 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.150 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.150 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:01.150 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.150 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.150 [2024-12-06 17:41:52.983497] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.150 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.150 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:01.150 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.150 17:41:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.150 Malloc0 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.150 [2024-12-06 17:41:53.056453] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.150 [ 00:26:01.150 { 00:26:01.150 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:01.150 "subtype": "Discovery", 00:26:01.150 "listen_addresses": [], 00:26:01.150 "allow_any_host": true, 00:26:01.150 "hosts": [] 00:26:01.150 }, 00:26:01.150 { 00:26:01.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:01.150 "subtype": "NVMe", 00:26:01.150 "listen_addresses": [ 00:26:01.150 { 00:26:01.150 "trtype": "TCP", 00:26:01.150 "adrfam": "IPv4", 00:26:01.150 "traddr": "10.0.0.2", 00:26:01.150 "trsvcid": "4420" 00:26:01.150 } 00:26:01.150 ], 00:26:01.150 "allow_any_host": true, 00:26:01.150 "hosts": [], 00:26:01.150 "serial_number": "SPDK00000000000001", 00:26:01.150 "model_number": "SPDK bdev Controller", 00:26:01.150 "max_namespaces": 2, 00:26:01.150 "min_cntlid": 1, 00:26:01.150 "max_cntlid": 65519, 00:26:01.150 "namespaces": [ 00:26:01.150 { 00:26:01.150 "nsid": 1, 00:26:01.150 "bdev_name": "Malloc0", 00:26:01.150 "name": "Malloc0", 00:26:01.150 "nguid": "3DBEDB2DA072425D914A3A64A2F6BB47", 00:26:01.150 "uuid": "3dbedb2d-a072-425d-914a-3a64a2f6bb47" 00:26:01.150 } 00:26:01.150 ] 00:26:01.150 } 00:26:01.150 ] 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1693120 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:01.150 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.412 Malloc1 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.412 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.412 Asynchronous Event Request test 00:26:01.412 Attaching to 10.0.0.2 00:26:01.412 Attached to 10.0.0.2 00:26:01.412 Registering asynchronous event callbacks... 00:26:01.412 Starting namespace attribute notice tests for all controllers... 00:26:01.412 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:01.412 aer_cb - Changed Namespace 00:26:01.412 Cleaning up... 
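The block above is the AER exercise itself: test/nvme/aer/aer connects to 10.0.0.2:4420, registers asynchronous-event callbacks, and waits, and the harness then hot-adds a second namespace, which makes the target raise a Namespace Attribute Changed notice (the "log page 4" line in the callback output). A minimal sketch of the trigger, assuming SPDK's stock scripts/rpc.py against the target's default RPC socket (rpc_cmd in this harness is a thin wrapper around it):
  # while the aer tool is attached and waiting, add a second namespace (nsid 2);
  # the target then emits a Namespace Attribute Changed AEN (log page 0x04)
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2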
00:26:01.412 [ 00:26:01.412 { 00:26:01.412 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:01.412 "subtype": "Discovery", 00:26:01.412 "listen_addresses": [], 00:26:01.412 "allow_any_host": true, 00:26:01.412 "hosts": [] 00:26:01.412 }, 00:26:01.412 { 00:26:01.412 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:01.412 "subtype": "NVMe", 00:26:01.412 "listen_addresses": [ 00:26:01.412 { 00:26:01.412 "trtype": "TCP", 00:26:01.412 "adrfam": "IPv4", 00:26:01.412 "traddr": "10.0.0.2", 00:26:01.412 "trsvcid": "4420" 00:26:01.412 } 00:26:01.412 ], 00:26:01.412 "allow_any_host": true, 00:26:01.412 "hosts": [], 00:26:01.412 "serial_number": "SPDK00000000000001", 00:26:01.412 "model_number": "SPDK bdev Controller", 00:26:01.412 "max_namespaces": 2, 00:26:01.412 "min_cntlid": 1, 00:26:01.412 "max_cntlid": 65519, 00:26:01.412 "namespaces": [ 00:26:01.412 { 00:26:01.412 "nsid": 1, 00:26:01.412 "bdev_name": "Malloc0", 00:26:01.412 "name": "Malloc0", 00:26:01.412 "nguid": "3DBEDB2DA072425D914A3A64A2F6BB47", 00:26:01.674 "uuid": "3dbedb2d-a072-425d-914a-3a64a2f6bb47" 00:26:01.674 }, 00:26:01.674 { 00:26:01.674 "nsid": 2, 00:26:01.674 "bdev_name": "Malloc1", 00:26:01.674 "name": "Malloc1", 00:26:01.674 "nguid": "9754AE1570464D918CB6BD71188A12C8", 00:26:01.674 "uuid": "9754ae15-7046-4d91-8cb6-bd71188a12c8" 00:26:01.674 } 00:26:01.674 ] 00:26:01.674 } 00:26:01.674 ] 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1693120 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:01.674 rmmod 
nvme_tcp 00:26:01.674 rmmod nvme_fabrics 00:26:01.674 rmmod nvme_keyring 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1693082 ']' 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1693082 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1693082 ']' 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1693082 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1693082 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1693082' 00:26:01.674 killing process with pid 1693082 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1693082 00:26:01.674 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1693082 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.936 17:41:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.481 17:41:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:04.481 00:26:04.481 real 0m11.581s 00:26:04.481 user 0m8.548s 00:26:04.481 sys 0m6.201s 00:26:04.481 17:41:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.481 17:41:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:04.481 ************************************ 00:26:04.481 END TEST nvmf_aer 00:26:04.481 ************************************ 00:26:04.481 17:41:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:04.481 17:41:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:04.481 17:41:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.481 17:41:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.481 ************************************ 00:26:04.481 START TEST nvmf_async_init 00:26:04.481 ************************************ 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:04.481 * Looking for test storage... 00:26:04.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.481 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:04.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.481 --rc genhtml_branch_coverage=1 00:26:04.481 --rc genhtml_function_coverage=1 00:26:04.482 --rc genhtml_legend=1 00:26:04.482 --rc geninfo_all_blocks=1 00:26:04.482 --rc geninfo_unexecuted_blocks=1 00:26:04.482 00:26:04.482 ' 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:04.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.482 --rc genhtml_branch_coverage=1 00:26:04.482 --rc genhtml_function_coverage=1 00:26:04.482 --rc genhtml_legend=1 00:26:04.482 --rc geninfo_all_blocks=1 00:26:04.482 --rc geninfo_unexecuted_blocks=1 00:26:04.482 00:26:04.482 ' 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:04.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.482 --rc genhtml_branch_coverage=1 00:26:04.482 --rc genhtml_function_coverage=1 00:26:04.482 --rc genhtml_legend=1 00:26:04.482 --rc geninfo_all_blocks=1 00:26:04.482 --rc geninfo_unexecuted_blocks=1 00:26:04.482 00:26:04.482 ' 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:04.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.482 --rc genhtml_branch_coverage=1 00:26:04.482 --rc genhtml_function_coverage=1 00:26:04.482 --rc genhtml_legend=1 00:26:04.482 --rc geninfo_all_blocks=1 00:26:04.482 --rc geninfo_unexecuted_blocks=1 00:26:04.482 00:26:04.482 ' 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.482 17:41:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:04.482 17:41:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3841cf7c768f4b69a24e1cd6eebb3802 00:26:04.482 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.483 17:41:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.620 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.620 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:12.620 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:12.620 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:12.620 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:12.620 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:12.620 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:12.620 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:12.620 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:12.621 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:12.621 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:12.621 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:12.621 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.621 17:42:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:12.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:26:12.621 00:26:12.621 --- 10.0.0.2 ping statistics --- 00:26:12.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.621 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:12.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:26:12.621 00:26:12.621 --- 10.0.0.1 ping statistics --- 00:26:12.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.621 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:12.621 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.622 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1695673 00:26:12.622 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1695673 00:26:12.622 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:12.622 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1695673 ']' 00:26:12.622 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.622 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:12.622 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.622 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.622 17:42:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.622 [2024-12-06 17:42:03.816014] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
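The block above establishes the loopback topology the rest of this suite reuses: one port of the e810 pair (cvl_0_0) is moved into a dedicated network namespace and addressed as 10.0.0.2 for the target side, while its peer (cvl_0_1) stays in the root namespace as 10.0.0.1 for the initiator side; an iptables rule opens TCP port 4420 and a single ping in each direction confirms reachability, after which nvmf_tgt is launched inside the namespace via ip netns exec. A condensed sketch of that setup, assuming the same interface names and addresses the log shows (substitute your own NIC names):

    # Move the target-side port into its own namespace; the initiator
    # port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, tagged with a comment so teardown can strip
    # only SPDK's rules, then verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1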
00:26:12.622 [2024-12-06 17:42:03.816085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.622 [2024-12-06 17:42:03.917252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.622 [2024-12-06 17:42:03.968082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.622 [2024-12-06 17:42:03.968139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.622 [2024-12-06 17:42:03.968148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.622 [2024-12-06 17:42:03.968155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.622 [2024-12-06 17:42:03.968161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.622 [2024-12-06 17:42:03.968923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.622 [2024-12-06 17:42:04.672346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.622 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.884 null0 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3841cf7c768f4b69a24e1cd6eebb3802 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:12.884 [2024-12-06 17:42:04.732709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.884 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.146 nvme0n1 00:26:13.146 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.146 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:13.146 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.146 17:42:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.146 [ 00:26:13.146 { 00:26:13.146 "name": "nvme0n1", 00:26:13.146 "aliases": [ 00:26:13.146 "3841cf7c-768f-4b69-a24e-1cd6eebb3802" 00:26:13.146 ], 00:26:13.146 "product_name": "NVMe disk", 00:26:13.146 "block_size": 512, 00:26:13.146 "num_blocks": 2097152, 00:26:13.146 "uuid": "3841cf7c-768f-4b69-a24e-1cd6eebb3802", 00:26:13.146 "numa_id": 0, 00:26:13.146 "assigned_rate_limits": { 00:26:13.146 "rw_ios_per_sec": 0, 00:26:13.146 "rw_mbytes_per_sec": 0, 00:26:13.146 "r_mbytes_per_sec": 0, 00:26:13.146 "w_mbytes_per_sec": 0 00:26:13.146 }, 00:26:13.146 "claimed": false, 00:26:13.146 "zoned": false, 00:26:13.146 "supported_io_types": { 00:26:13.146 "read": true, 00:26:13.146 "write": true, 00:26:13.146 "unmap": false, 00:26:13.146 "flush": true, 00:26:13.146 "reset": true, 00:26:13.146 "nvme_admin": true, 00:26:13.146 "nvme_io": true, 00:26:13.146 "nvme_io_md": false, 00:26:13.146 "write_zeroes": true, 00:26:13.146 "zcopy": false, 00:26:13.146 "get_zone_info": false, 00:26:13.146 "zone_management": false, 00:26:13.146 "zone_append": false, 00:26:13.146 "compare": true, 00:26:13.146 "compare_and_write": true, 00:26:13.146 "abort": true, 00:26:13.146 "seek_hole": false, 00:26:13.146 "seek_data": false, 00:26:13.146 "copy": true, 00:26:13.146 "nvme_iov_md": false 00:26:13.146 }, 00:26:13.146 
"memory_domains": [ 00:26:13.146 { 00:26:13.146 "dma_device_id": "system", 00:26:13.146 "dma_device_type": 1 00:26:13.146 } 00:26:13.146 ], 00:26:13.146 "driver_specific": { 00:26:13.146 "nvme": [ 00:26:13.146 { 00:26:13.146 "trid": { 00:26:13.146 "trtype": "TCP", 00:26:13.146 "adrfam": "IPv4", 00:26:13.146 "traddr": "10.0.0.2", 00:26:13.146 "trsvcid": "4420", 00:26:13.146 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:13.146 }, 00:26:13.146 "ctrlr_data": { 00:26:13.146 "cntlid": 1, 00:26:13.146 "vendor_id": "0x8086", 00:26:13.146 "model_number": "SPDK bdev Controller", 00:26:13.146 "serial_number": "00000000000000000000", 00:26:13.146 "firmware_revision": "25.01", 00:26:13.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.146 "oacs": { 00:26:13.146 "security": 0, 00:26:13.146 "format": 0, 00:26:13.146 "firmware": 0, 00:26:13.146 "ns_manage": 0 00:26:13.146 }, 00:26:13.146 "multi_ctrlr": true, 00:26:13.146 "ana_reporting": false 00:26:13.146 }, 00:26:13.146 "vs": { 00:26:13.146 "nvme_version": "1.3" 00:26:13.146 }, 00:26:13.146 "ns_data": { 00:26:13.146 "id": 1, 00:26:13.146 "can_share": true 00:26:13.146 } 00:26:13.146 } 00:26:13.146 ], 00:26:13.146 "mp_policy": "active_passive" 00:26:13.146 } 00:26:13.146 } 00:26:13.146 ] 00:26:13.146 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.146 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:13.146 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.146 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.146 [2024-12-06 17:42:05.010456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:13.146 [2024-12-06 17:42:05.010550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228c880 (9): Bad file descriptor 00:26:13.146 [2024-12-06 17:42:05.142755] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:26:13.146 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.146 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:13.146 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.146 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.146 [ 00:26:13.146 { 00:26:13.146 "name": "nvme0n1", 00:26:13.146 "aliases": [ 00:26:13.146 "3841cf7c-768f-4b69-a24e-1cd6eebb3802" 00:26:13.146 ], 00:26:13.146 "product_name": "NVMe disk", 00:26:13.146 "block_size": 512, 00:26:13.146 "num_blocks": 2097152, 00:26:13.146 "uuid": "3841cf7c-768f-4b69-a24e-1cd6eebb3802", 00:26:13.146 "numa_id": 0, 00:26:13.146 "assigned_rate_limits": { 00:26:13.146 "rw_ios_per_sec": 0, 00:26:13.146 "rw_mbytes_per_sec": 0, 00:26:13.146 "r_mbytes_per_sec": 0, 00:26:13.146 "w_mbytes_per_sec": 0 00:26:13.146 }, 00:26:13.146 "claimed": false, 00:26:13.146 "zoned": false, 00:26:13.146 "supported_io_types": { 00:26:13.146 "read": true, 00:26:13.146 "write": true, 00:26:13.146 "unmap": false, 00:26:13.146 "flush": true, 00:26:13.146 "reset": true, 00:26:13.146 "nvme_admin": true, 00:26:13.146 "nvme_io": true, 00:26:13.146 "nvme_io_md": false, 00:26:13.146 "write_zeroes": true, 00:26:13.146 "zcopy": false, 00:26:13.146 "get_zone_info": false, 00:26:13.146 "zone_management": false, 00:26:13.146 "zone_append": false, 00:26:13.146 "compare": true, 00:26:13.146 "compare_and_write": true, 00:26:13.146 "abort": true, 00:26:13.146 "seek_hole": false, 00:26:13.146 "seek_data": false, 00:26:13.146 "copy": true, 00:26:13.146 "nvme_iov_md": false 00:26:13.146 }, 00:26:13.147 "memory_domains": [ 00:26:13.147 { 00:26:13.147 "dma_device_id": "system", 00:26:13.147 "dma_device_type": 1 00:26:13.147 } 00:26:13.147 ], 00:26:13.147 "driver_specific": { 00:26:13.147 "nvme": [ 00:26:13.147 { 00:26:13.147 "trid": { 00:26:13.147 "trtype": "TCP", 00:26:13.147 "adrfam": "IPv4", 00:26:13.147 "traddr": "10.0.0.2", 00:26:13.147 "trsvcid": "4420", 00:26:13.147 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:13.147 }, 00:26:13.147 "ctrlr_data": { 00:26:13.147 "cntlid": 2, 00:26:13.147 "vendor_id": "0x8086", 00:26:13.147 "model_number": "SPDK bdev Controller", 00:26:13.147 "serial_number": "00000000000000000000", 00:26:13.147 "firmware_revision": "25.01", 00:26:13.147 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.147 "oacs": { 00:26:13.147 "security": 0, 00:26:13.147 "format": 0, 00:26:13.147 "firmware": 0, 00:26:13.147 "ns_manage": 0 00:26:13.147 }, 00:26:13.147 "multi_ctrlr": true, 00:26:13.147 "ana_reporting": false 00:26:13.147 }, 00:26:13.147 "vs": { 00:26:13.147 "nvme_version": "1.3" 00:26:13.147 }, 00:26:13.147 "ns_data": { 00:26:13.147 "id": 1, 00:26:13.147 "can_share": true 00:26:13.147 } 00:26:13.147 } 00:26:13.147 ], 00:26:13.147 "mp_policy": "active_passive" 00:26:13.147 } 00:26:13.147 } 00:26:13.147 ] 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
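The remainder of the test repeats the attach over a TLS-protected listener; note the log's own caveat that TLS support is considered experimental. Both sides must reference the same pre-shared key through the keyring: the target binds it to an allowed host NQN, and the initiator presents that NQN plus the key when connecting to the secure port 4421. A sketch under the same rpc.py assumption as above, using the sample PSK from the log:

    KEY=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"
    $rpc keyring_file_add_key key0 "$KEY"
    # Disable the open-door policy so only explicitly granted hosts may
    # connect, then expose a TLS-only listener on a second port.
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    # The initiator supplies the matching host NQN and key.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0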
00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Xbig5BwvE3 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Xbig5BwvE3 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Xbig5BwvE3 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.147 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.408 [2024-12-06 17:42:05.231212] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:13.408 [2024-12-06 17:42:05.231375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.408 [2024-12-06 17:42:05.255289] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:13.408 nvme0n1 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.408 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.408 [ 00:26:13.408 { 00:26:13.408 "name": "nvme0n1", 00:26:13.408 "aliases": [ 00:26:13.408 "3841cf7c-768f-4b69-a24e-1cd6eebb3802" 00:26:13.408 ], 00:26:13.408 "product_name": "NVMe disk", 00:26:13.408 "block_size": 512, 00:26:13.408 "num_blocks": 2097152, 00:26:13.408 "uuid": "3841cf7c-768f-4b69-a24e-1cd6eebb3802", 00:26:13.408 "numa_id": 0, 00:26:13.408 "assigned_rate_limits": { 00:26:13.408 "rw_ios_per_sec": 0, 00:26:13.408 "rw_mbytes_per_sec": 0, 00:26:13.408 "r_mbytes_per_sec": 0, 00:26:13.408 "w_mbytes_per_sec": 0 00:26:13.408 }, 00:26:13.408 "claimed": false, 00:26:13.408 "zoned": false, 00:26:13.408 "supported_io_types": { 00:26:13.408 "read": true, 00:26:13.408 "write": true, 00:26:13.408 "unmap": false, 00:26:13.408 "flush": true, 00:26:13.408 "reset": true, 00:26:13.408 "nvme_admin": true, 00:26:13.408 "nvme_io": true, 00:26:13.408 "nvme_io_md": false, 00:26:13.408 "write_zeroes": true, 00:26:13.408 "zcopy": false, 00:26:13.408 "get_zone_info": false, 00:26:13.408 "zone_management": false, 00:26:13.408 "zone_append": false, 00:26:13.408 "compare": true, 00:26:13.408 "compare_and_write": true, 00:26:13.408 "abort": true, 00:26:13.408 "seek_hole": false, 00:26:13.408 "seek_data": false, 00:26:13.408 "copy": true, 00:26:13.408 "nvme_iov_md": false 00:26:13.408 }, 00:26:13.408 "memory_domains": [ 00:26:13.408 { 00:26:13.408 "dma_device_id": "system", 00:26:13.408 "dma_device_type": 1 00:26:13.408 } 00:26:13.408 ], 00:26:13.408 "driver_specific": { 00:26:13.408 "nvme": [ 00:26:13.408 { 00:26:13.408 "trid": { 00:26:13.408 "trtype": "TCP", 00:26:13.408 "adrfam": "IPv4", 00:26:13.408 "traddr": "10.0.0.2", 00:26:13.408 "trsvcid": "4421", 00:26:13.408 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:13.408 }, 00:26:13.408 "ctrlr_data": { 00:26:13.408 "cntlid": 3, 00:26:13.408 "vendor_id": "0x8086", 00:26:13.408 "model_number": "SPDK bdev Controller", 00:26:13.408 "serial_number": "00000000000000000000", 00:26:13.408 "firmware_revision": "25.01", 00:26:13.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.409 "oacs": { 00:26:13.409 "security": 0, 00:26:13.409 "format": 0, 00:26:13.409 "firmware": 0, 00:26:13.409 "ns_manage": 0 00:26:13.409 }, 00:26:13.409 "multi_ctrlr": true, 00:26:13.409 "ana_reporting": false 00:26:13.409 }, 00:26:13.409 "vs": { 00:26:13.409 "nvme_version": "1.3" 00:26:13.409 }, 00:26:13.409 "ns_data": { 00:26:13.409 "id": 1, 00:26:13.409 "can_share": true 00:26:13.409 } 00:26:13.409 } 00:26:13.409 ], 00:26:13.409 "mp_policy": "active_passive" 00:26:13.409 } 00:26:13.409 } 00:26:13.409 ] 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Xbig5BwvE3 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
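The cntlid of 3 and trsvcid of 4421 in the dump above confirm that this third association really went through the secure listener rather than the original plaintext port. What follows is the standard nvmftestfini teardown, which mirrors the setup in reverse; a rough sketch, with the explicit namespace deletion standing in for the suite's remove_spdk_ns helper:

    rm -f "$KEY"                       # discard the PSK file
    modprobe -v -r nvme-tcp            # also unloads nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                    # killprocess: stop the nvmf_tgt reactor
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK's rules
    ip netns delete cvl_0_0_ns_spdk    # approximates remove_spdk_ns
    ip -4 addr flush cvl_0_1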
00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:13.409 rmmod nvme_tcp 00:26:13.409 rmmod nvme_fabrics 00:26:13.409 rmmod nvme_keyring 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1695673 ']' 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1695673 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1695673 ']' 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1695673 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.409 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1695673 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1695673' 00:26:13.669 killing process with pid 1695673 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1695673 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1695673 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.669 17:42:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:16.216 00:26:16.216 real 0m11.745s 00:26:16.216 user 0m4.270s 00:26:16.216 sys 0m6.063s 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.216 ************************************ 00:26:16.216 END TEST nvmf_async_init 00:26:16.216 ************************************ 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.216 ************************************ 00:26:16.216 START TEST dma 00:26:16.216 ************************************ 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:16.216 * Looking for test storage... 00:26:16.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:26:16.216 17:42:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:16.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.216 --rc genhtml_branch_coverage=1 00:26:16.216 --rc genhtml_function_coverage=1 00:26:16.216 --rc genhtml_legend=1 00:26:16.216 --rc geninfo_all_blocks=1 00:26:16.216 --rc geninfo_unexecuted_blocks=1 00:26:16.216 00:26:16.216 ' 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:16.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.216 --rc genhtml_branch_coverage=1 00:26:16.216 --rc genhtml_function_coverage=1 00:26:16.216 --rc genhtml_legend=1 00:26:16.216 --rc geninfo_all_blocks=1 00:26:16.216 --rc geninfo_unexecuted_blocks=1 00:26:16.216 00:26:16.216 ' 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:16.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.216 --rc genhtml_branch_coverage=1 00:26:16.216 --rc genhtml_function_coverage=1 00:26:16.216 --rc genhtml_legend=1 00:26:16.216 --rc geninfo_all_blocks=1 00:26:16.216 --rc geninfo_unexecuted_blocks=1 00:26:16.216 00:26:16.216 ' 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:16.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.216 --rc genhtml_branch_coverage=1 00:26:16.216 --rc genhtml_function_coverage=1 00:26:16.216 --rc genhtml_legend=1 00:26:16.216 --rc geninfo_all_blocks=1 00:26:16.216 --rc geninfo_unexecuted_blocks=1 00:26:16.216 00:26:16.216 ' 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.216 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.217 
17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:16.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:16.217 00:26:16.217 real 0m0.224s 00:26:16.217 user 0m0.137s 00:26:16.217 sys 0m0.102s 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:16.217 ************************************ 00:26:16.217 END TEST dma 00:26:16.217 ************************************ 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.217 ************************************ 00:26:16.217 START TEST nvmf_identify 00:26:16.217 
************************************ 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:16.217 * Looking for test storage... 00:26:16.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:26:16.217 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.479 --rc genhtml_branch_coverage=1 00:26:16.479 --rc genhtml_function_coverage=1 00:26:16.479 --rc genhtml_legend=1 00:26:16.479 --rc geninfo_all_blocks=1 00:26:16.479 --rc geninfo_unexecuted_blocks=1 00:26:16.479 00:26:16.479 ' 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.479 --rc genhtml_branch_coverage=1 00:26:16.479 --rc genhtml_function_coverage=1 00:26:16.479 --rc genhtml_legend=1 00:26:16.479 --rc geninfo_all_blocks=1 00:26:16.479 --rc geninfo_unexecuted_blocks=1 00:26:16.479 00:26:16.479 ' 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.479 --rc genhtml_branch_coverage=1 00:26:16.479 --rc genhtml_function_coverage=1 00:26:16.479 --rc genhtml_legend=1 00:26:16.479 --rc geninfo_all_blocks=1 00:26:16.479 --rc geninfo_unexecuted_blocks=1 00:26:16.479 00:26:16.479 ' 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.479 --rc genhtml_branch_coverage=1 00:26:16.479 --rc genhtml_function_coverage=1 00:26:16.479 --rc genhtml_legend=1 00:26:16.479 --rc geninfo_all_blocks=1 00:26:16.479 --rc geninfo_unexecuted_blocks=1 00:26:16.479 00:26:16.479 ' 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:16.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:16.479 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:16.480 17:42:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:24.769 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:24.769 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
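gather_supported_nvmf_pci_devs, traced above, builds the e810/x722/mlx candidate lists from vendor:device IDs — this run matches 0x8086:0x159b, the two E810 ports bound to the ice driver — and then maps each surviving PCI address to its kernel net device through /sys/bus/pci/devices/$pci/net/. A rough standalone version of that sysfs walk; the loop itself is an illustration, only the sysfs paths and the IDs come from the trace:

  #!/usr/bin/env bash
  # List net devices backed by Intel E810 ports (vendor 0x8086, device 0x159b),
  # the same per-PCI-address lookup the trace above performs.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue
          echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done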
00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:24.769 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.769 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:24.769 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:24.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:26:24.770 00:26:24.770 --- 10.0.0.2 ping statistics --- 00:26:24.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.770 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:26:24.770 00:26:24.770 --- 10.0.0.1 ping statistics --- 00:26:24.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.770 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1698679 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1698679 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1698679 ']' 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.770 17:42:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:24.770 [2024-12-06 17:42:15.984127] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
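Before any NVMe/TCP traffic flows, nvmf_tcp_init (traced above) splits the two E810 ports into a target/initiator pair on a single machine: cvl_0_0 is moved into a fresh namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24 as the target side, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and a one-packet ping in each direction proves connectivity; nvmf_tgt is then launched inside the namespace via ip netns exec. Condensed from the commands in the trace, with interface and namespace names as in this run:

  # Target port lives in its own namespace; initiator port stays in the root ns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic reach the initiator-side port.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Keeping the target in its own namespace lets the kernel initiator stack and the SPDK target share one host while still exercising the real NICs, rather than short-circuiting through loopback.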
00:26:24.770 [2024-12-06 17:42:15.984195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.770 [2024-12-06 17:42:16.082853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:24.770 [2024-12-06 17:42:16.137387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.770 [2024-12-06 17:42:16.137442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.770 [2024-12-06 17:42:16.137451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.770 [2024-12-06 17:42:16.137459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.770 [2024-12-06 17:42:16.137465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.770 [2024-12-06 17:42:16.139449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.770 [2024-12-06 17:42:16.139605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:24.770 [2024-12-06 17:42:16.139766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:24.770 [2024-12-06 17:42:16.139767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.770 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.770 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:26:24.770 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:24.770 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.770 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:24.770 [2024-12-06 17:42:16.821072] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.770 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.770 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:24.770 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:24.770 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.031 Malloc0 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.031 [2024-12-06 17:42:16.940410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.031 [ 00:26:25.031 { 00:26:25.031 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:25.031 "subtype": "Discovery", 00:26:25.031 "listen_addresses": [ 00:26:25.031 { 00:26:25.031 "trtype": "TCP", 00:26:25.031 "adrfam": "IPv4", 00:26:25.031 "traddr": "10.0.0.2", 00:26:25.031 "trsvcid": "4420" 00:26:25.031 } 00:26:25.031 ], 00:26:25.031 "allow_any_host": true, 00:26:25.031 "hosts": [] 00:26:25.031 }, 00:26:25.031 { 00:26:25.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.031 "subtype": "NVMe", 00:26:25.031 "listen_addresses": [ 00:26:25.031 { 00:26:25.031 "trtype": "TCP", 00:26:25.031 "adrfam": "IPv4", 00:26:25.031 "traddr": "10.0.0.2", 00:26:25.031 "trsvcid": "4420" 00:26:25.031 } 00:26:25.031 ], 00:26:25.031 "allow_any_host": true, 00:26:25.031 "hosts": [], 00:26:25.031 "serial_number": "SPDK00000000000001", 00:26:25.031 "model_number": "SPDK bdev Controller", 00:26:25.031 "max_namespaces": 32, 00:26:25.031 "min_cntlid": 1, 00:26:25.031 "max_cntlid": 65519, 00:26:25.031 "namespaces": [ 00:26:25.031 { 00:26:25.031 "nsid": 1, 00:26:25.031 "bdev_name": "Malloc0", 00:26:25.031 "name": "Malloc0", 00:26:25.031 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:25.031 "eui64": "ABCDEF0123456789", 00:26:25.031 "uuid": "a2f9dc94-fd6c-4489-966d-e5b08b1eee03" 00:26:25.031 } 00:26:25.031 ] 00:26:25.031 } 00:26:25.031 ] 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.031 17:42:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:25.031 [2024-12-06 17:42:17.006159] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:26:25.031 [2024-12-06 17:42:17.006230] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698716 ] 00:26:25.031 [2024-12-06 17:42:17.064849] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:26:25.031 [2024-12-06 17:42:17.064925] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:25.032 [2024-12-06 17:42:17.064931] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:25.032 [2024-12-06 17:42:17.064951] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:25.032 [2024-12-06 17:42:17.064963] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:25.032 [2024-12-06 17:42:17.069067] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:26:25.032 [2024-12-06 17:42:17.069121] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x541690 0 00:26:25.032 [2024-12-06 17:42:17.076655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:25.032 [2024-12-06 17:42:17.076674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:25.032 [2024-12-06 17:42:17.076680] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:25.032 [2024-12-06 17:42:17.076683] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:25.032 [2024-12-06 17:42:17.076732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.076739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.076744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x541690) 00:26:25.032 [2024-12-06 17:42:17.076763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:25.032 [2024-12-06 17:42:17.076789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3100, cid 0, qid 0 00:26:25.032 [2024-12-06 17:42:17.087652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.032 [2024-12-06 17:42:17.087665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.032 [2024-12-06 17:42:17.087669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.087675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3100) on tqpair=0x541690 00:26:25.032 [2024-12-06 17:42:17.087688] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:25.032 [2024-12-06 17:42:17.087697] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:26:25.032 [2024-12-06 17:42:17.087703] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:26:25.032 [2024-12-06 17:42:17.087721] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.087725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.087729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x541690) 00:26:25.032 [2024-12-06 17:42:17.087738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.032 [2024-12-06 17:42:17.087756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3100, cid 0, qid 0 00:26:25.032 [2024-12-06 17:42:17.087963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.032 [2024-12-06 17:42:17.087970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.032 [2024-12-06 17:42:17.087973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.087978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3100) on tqpair=0x541690 00:26:25.032 [2024-12-06 17:42:17.087984] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:26:25.032 [2024-12-06 17:42:17.087992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:26:25.032 [2024-12-06 17:42:17.087999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x541690) 00:26:25.032 [2024-12-06 17:42:17.088013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.032 [2024-12-06 17:42:17.088025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3100, cid 0, qid 0 00:26:25.032 [2024-12-06 17:42:17.088223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.032 [2024-12-06 17:42:17.088229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.032 [2024-12-06 17:42:17.088233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3100) on tqpair=0x541690 00:26:25.032 [2024-12-06 17:42:17.088247] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:26:25.032 [2024-12-06 17:42:17.088257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:25.032 [2024-12-06 17:42:17.088264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x541690) 00:26:25.032 [2024-12-06 17:42:17.088278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.032 [2024-12-06 17:42:17.088289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3100, cid 0, qid 0 
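The RPC sequence earlier in the test (identify.sh steps @24 through @37) provisions the whole target over JSON-RPC — the TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with fixed NGUID/EUI64, and listeners for both the subsystem and discovery on 10.0.0.2:4420 — after which spdk_nvme_identify connects and the DEBUG lines above trace the standard fabric bring-up: FABRIC CONNECT on the admin queue, VS and CAP property reads, the CC.EN 0-then-1 toggle with its CSTS.RDY waits, then IDENTIFY and keep-alive setup. The same provisioning issued directly with SPDK's scripts/rpc.py (rpc_cmd in the trace is effectively a wrapper around it); method names and arguments are taken verbatim from the trace:

  # Equivalent of the rpc_cmd calls in the trace, via scripts/rpc.py.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_get_subsystems   # prints the JSON shown above
  # Host-side view of the same discovery subsystem, using nvme-cli:
  nvme discover -t tcp -a 10.0.0.2 -s 4420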
00:26:25.032 [2024-12-06 17:42:17.088482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.032 [2024-12-06 17:42:17.088488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.032 [2024-12-06 17:42:17.088492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3100) on tqpair=0x541690 00:26:25.032 [2024-12-06 17:42:17.088501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:25.032 [2024-12-06 17:42:17.088511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x541690) 00:26:25.032 [2024-12-06 17:42:17.088526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.032 [2024-12-06 17:42:17.088536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3100, cid 0, qid 0 00:26:25.032 [2024-12-06 17:42:17.088733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.032 [2024-12-06 17:42:17.088740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.032 [2024-12-06 17:42:17.088744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3100) on tqpair=0x541690 00:26:25.032 [2024-12-06 17:42:17.088753] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:25.032 [2024-12-06 17:42:17.088758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:25.032 [2024-12-06 17:42:17.088767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:25.032 [2024-12-06 17:42:17.088880] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:26:25.032 [2024-12-06 17:42:17.088885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:25.032 [2024-12-06 17:42:17.088895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.088902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x541690) 00:26:25.032 [2024-12-06 17:42:17.088909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.032 [2024-12-06 17:42:17.088920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3100, cid 0, qid 0 00:26:25.032 [2024-12-06 17:42:17.089104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.032 [2024-12-06 17:42:17.089113] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.032 [2024-12-06 17:42:17.089117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.089121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3100) on tqpair=0x541690 00:26:25.032 [2024-12-06 17:42:17.089126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:25.032 [2024-12-06 17:42:17.089136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.089140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.089143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x541690) 00:26:25.032 [2024-12-06 17:42:17.089150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.032 [2024-12-06 17:42:17.089161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3100, cid 0, qid 0 00:26:25.032 [2024-12-06 17:42:17.089341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.032 [2024-12-06 17:42:17.089347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.032 [2024-12-06 17:42:17.089351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.089355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3100) on tqpair=0x541690 00:26:25.032 [2024-12-06 17:42:17.089359] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:25.032 [2024-12-06 17:42:17.089364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:25.032 [2024-12-06 17:42:17.089373] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:26:25.032 [2024-12-06 17:42:17.089381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:25.032 [2024-12-06 17:42:17.089391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.032 [2024-12-06 17:42:17.089395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x541690) 00:26:25.032 [2024-12-06 17:42:17.089402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.032 [2024-12-06 17:42:17.089413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3100, cid 0, qid 0 00:26:25.032 [2024-12-06 17:42:17.089634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.033 [2024-12-06 17:42:17.089648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.033 [2024-12-06 17:42:17.089652] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.033 [2024-12-06 17:42:17.089656] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x541690): datao=0, datal=4096, cccid=0 00:26:25.033 [2024-12-06 17:42:17.089662] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x5a3100) on tqpair(0x541690): expected_datao=0, payload_size=4096 00:26:25.033 [2024-12-06 17:42:17.089667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.033 [2024-12-06 17:42:17.089683] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.033 [2024-12-06 17:42:17.089688] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.130837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.297 [2024-12-06 17:42:17.130855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.297 [2024-12-06 17:42:17.130859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.130864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3100) on tqpair=0x541690 00:26:25.297 [2024-12-06 17:42:17.130877] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:26:25.297 [2024-12-06 17:42:17.130894] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:26:25.297 [2024-12-06 17:42:17.130899] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:26:25.297 [2024-12-06 17:42:17.130906] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:26:25.297 [2024-12-06 17:42:17.130911] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:26:25.297 [2024-12-06 17:42:17.130917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:26:25.297 [2024-12-06 17:42:17.130928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:25.297 [2024-12-06 17:42:17.130937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.130941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.130946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x541690) 00:26:25.297 [2024-12-06 17:42:17.130956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:25.297 [2024-12-06 17:42:17.130972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3100, cid 0, qid 0 00:26:25.297 [2024-12-06 17:42:17.131128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.297 [2024-12-06 17:42:17.131136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.297 [2024-12-06 17:42:17.131140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3100) on tqpair=0x541690 00:26:25.297 [2024-12-06 17:42:17.131154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x541690) 00:26:25.297 [2024-12-06 
17:42:17.131168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.297 [2024-12-06 17:42:17.131175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x541690) 00:26:25.297 [2024-12-06 17:42:17.131188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.297 [2024-12-06 17:42:17.131195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x541690) 00:26:25.297 [2024-12-06 17:42:17.131208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.297 [2024-12-06 17:42:17.131216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x541690) 00:26:25.297 [2024-12-06 17:42:17.131229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.297 [2024-12-06 17:42:17.131234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:25.297 [2024-12-06 17:42:17.131250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:25.297 [2024-12-06 17:42:17.131258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x541690) 00:26:25.297 [2024-12-06 17:42:17.131269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.297 [2024-12-06 17:42:17.131281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3100, cid 0, qid 0 00:26:25.297 [2024-12-06 17:42:17.131287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3280, cid 1, qid 0 00:26:25.297 [2024-12-06 17:42:17.131291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3400, cid 2, qid 0 00:26:25.297 [2024-12-06 17:42:17.131297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3580, cid 3, qid 0 00:26:25.297 [2024-12-06 17:42:17.131302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3700, cid 4, qid 0 00:26:25.297 [2024-12-06 17:42:17.131568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.297 [2024-12-06 17:42:17.131575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.297 [2024-12-06 17:42:17.131579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.297 
[2024-12-06 17:42:17.131583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3700) on tqpair=0x541690 00:26:25.297 [2024-12-06 17:42:17.131589] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:26:25.297 [2024-12-06 17:42:17.131594] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:26:25.297 [2024-12-06 17:42:17.131607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.131612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x541690) 00:26:25.297 [2024-12-06 17:42:17.131618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.297 [2024-12-06 17:42:17.131630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3700, cid 4, qid 0 00:26:25.297 [2024-12-06 17:42:17.135651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.297 [2024-12-06 17:42:17.135659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.297 [2024-12-06 17:42:17.135663] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.135667] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x541690): datao=0, datal=4096, cccid=4 00:26:25.297 [2024-12-06 17:42:17.135672] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a3700) on tqpair(0x541690): expected_datao=0, payload_size=4096 00:26:25.297 [2024-12-06 17:42:17.135677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.135684] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.135688] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.135694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.297 [2024-12-06 17:42:17.135701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.297 [2024-12-06 17:42:17.135704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.135708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3700) on tqpair=0x541690 00:26:25.297 [2024-12-06 17:42:17.135723] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:26:25.297 [2024-12-06 17:42:17.135753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.135758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x541690) 00:26:25.297 [2024-12-06 17:42:17.135768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.297 [2024-12-06 17:42:17.135776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.135780] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.135784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x541690) 00:26:25.297 [2024-12-06 17:42:17.135790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.297 [2024-12-06 17:42:17.135806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3700, cid 4, qid 0 00:26:25.297 [2024-12-06 17:42:17.135812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3880, cid 5, qid 0 00:26:25.297 [2024-12-06 17:42:17.136085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.297 [2024-12-06 17:42:17.136091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.297 [2024-12-06 17:42:17.136095] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.136099] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x541690): datao=0, datal=1024, cccid=4 00:26:25.297 [2024-12-06 17:42:17.136104] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a3700) on tqpair(0x541690): expected_datao=0, payload_size=1024 00:26:25.297 [2024-12-06 17:42:17.136108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.136115] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.136119] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.136124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.297 [2024-12-06 17:42:17.136130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.297 [2024-12-06 17:42:17.136134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.136138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3880) on tqpair=0x541690 00:26:25.297 [2024-12-06 17:42:17.178652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.297 [2024-12-06 17:42:17.178664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.297 [2024-12-06 17:42:17.178667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.178671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3700) on tqpair=0x541690 00:26:25.297 [2024-12-06 17:42:17.178685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.297 [2024-12-06 17:42:17.178689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x541690) 00:26:25.297 [2024-12-06 17:42:17.178696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.298 [2024-12-06 17:42:17.178713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3700, cid 4, qid 0 00:26:25.298 [2024-12-06 17:42:17.178960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.298 [2024-12-06 17:42:17.178967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.298 [2024-12-06 17:42:17.178971] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.298 [2024-12-06 17:42:17.178974] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x541690): datao=0, datal=3072, cccid=4 00:26:25.298 [2024-12-06 17:42:17.178979] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a3700) on tqpair(0x541690): expected_datao=0, payload_size=3072 00:26:25.298 [2024-12-06 17:42:17.178984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
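The GET FEATURES KEEP ALIVE TIMER / KEEP ALIVE (18) pair above is the host-side keep-alive negotiation: the SPDK host schedules keep-alives at half the configured timeout, which is why the library's default 10000 ms timeout appears in this log as "Sending keep alive every 5000000 us". A minimal sketch of how a caller would pick that timeout, assuming the stock SPDK host API (connect_with_keep_alive is a hypothetical helper name, not code from this test):

#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_with_keep_alive(const struct spdk_nvme_transport_id *trid)
{
    struct spdk_nvme_ctrlr_opts opts;

    /* Start from the library defaults, then pin the keep-alive timeout. */
    spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
    opts.keep_alive_timeout_ms = 10000; /* host then sends KEEP ALIVE about every 5 s */

    /* Synchronous connect; returns NULL on failure. */
    return spdk_nvme_connect(trid, &opts, sizeof(opts));
}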
00:26:25.298 [2024-12-06 17:42:17.179005] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:25.298 [2024-12-06 17:42:17.179010] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:25.298 [2024-12-06 17:42:17.179167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:25.298 [2024-12-06 17:42:17.179174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:25.298 [2024-12-06 17:42:17.179181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:25.298 [2024-12-06 17:42:17.179185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3700) on tqpair=0x541690
00:26:25.298 [2024-12-06 17:42:17.179194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:25.298 [2024-12-06 17:42:17.179198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x541690)
00:26:25.298 [2024-12-06 17:42:17.179204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.298 [2024-12-06 17:42:17.179218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3700, cid 4, qid 0
00:26:25.298 [2024-12-06 17:42:17.179443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:25.298 [2024-12-06 17:42:17.179449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:25.298 [2024-12-06 17:42:17.179453] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:25.298 [2024-12-06 17:42:17.179456] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x541690): datao=0, datal=8, cccid=4
00:26:25.298 [2024-12-06 17:42:17.179461] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a3700) on tqpair(0x541690): expected_datao=0, payload_size=8
00:26:25.298 [2024-12-06 17:42:17.179465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:25.298 [2024-12-06 17:42:17.179471] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:25.298 [2024-12-06 17:42:17.179475] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:25.298 [2024-12-06 17:42:17.219844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:25.298 [2024-12-06 17:42:17.219855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:25.298 [2024-12-06 17:42:17.219859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:25.298 [2024-12-06 17:42:17.219863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3700) on tqpair=0x541690
00:26:25.298 =====================================================
00:26:25.298 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:26:25.298 =====================================================
00:26:25.298 Controller Capabilities/Features
00:26:25.298 ================================
00:26:25.298 Vendor ID: 0000
00:26:25.298 Subsystem Vendor ID: 0000
00:26:25.298 Serial Number: ....................
00:26:25.298 Model Number: ........................................
00:26:25.298 Firmware Version: 25.01
00:26:25.298 Recommended Arb Burst: 0
00:26:25.298 IEEE OUI Identifier: 00 00 00
00:26:25.298 Multi-path I/O
00:26:25.298 May have multiple subsystem ports: No
00:26:25.298 May have multiple controllers: No
00:26:25.298 Associated with SR-IOV VF: No
00:26:25.298 Max Data Transfer Size: 131072
00:26:25.298 Max Number of Namespaces: 0
00:26:25.298 Max Number of I/O Queues: 1024
00:26:25.298 NVMe Specification Version (VS): 1.3
00:26:25.298 NVMe Specification Version (Identify): 1.3
00:26:25.298 Maximum Queue Entries: 128
00:26:25.298 Contiguous Queues Required: Yes
00:26:25.298 Arbitration Mechanisms Supported
00:26:25.298 Weighted Round Robin: Not Supported
00:26:25.298 Vendor Specific: Not Supported
00:26:25.298 Reset Timeout: 15000 ms
00:26:25.298 Doorbell Stride: 4 bytes
00:26:25.298 NVM Subsystem Reset: Not Supported
00:26:25.298 Command Sets Supported
00:26:25.298 NVM Command Set: Supported
00:26:25.298 Boot Partition: Not Supported
00:26:25.298 Memory Page Size Minimum: 4096 bytes
00:26:25.298 Memory Page Size Maximum: 4096 bytes
00:26:25.298 Persistent Memory Region: Not Supported
00:26:25.298 Optional Asynchronous Events Supported
00:26:25.298 Namespace Attribute Notices: Not Supported
00:26:25.298 Firmware Activation Notices: Not Supported
00:26:25.298 ANA Change Notices: Not Supported
00:26:25.298 PLE Aggregate Log Change Notices: Not Supported
00:26:25.298 LBA Status Info Alert Notices: Not Supported
00:26:25.298 EGE Aggregate Log Change Notices: Not Supported
00:26:25.298 Normal NVM Subsystem Shutdown event: Not Supported
00:26:25.298 Zone Descriptor Change Notices: Not Supported
00:26:25.298 Discovery Log Change Notices: Supported
00:26:25.298 Controller Attributes
00:26:25.298 128-bit Host Identifier: Not Supported
00:26:25.298 Non-Operational Permissive Mode: Not Supported
00:26:25.298 NVM Sets: Not Supported
00:26:25.298 Read Recovery Levels: Not Supported
00:26:25.298 Endurance Groups: Not Supported
00:26:25.298 Predictable Latency Mode: Not Supported
00:26:25.298 Traffic Based Keep ALive: Not Supported
00:26:25.298 Namespace Granularity: Not Supported
00:26:25.298 SQ Associations: Not Supported
00:26:25.298 UUID List: Not Supported
00:26:25.298 Multi-Domain Subsystem: Not Supported
00:26:25.298 Fixed Capacity Management: Not Supported
00:26:25.298 Variable Capacity Management: Not Supported
00:26:25.298 Delete Endurance Group: Not Supported
00:26:25.298 Delete NVM Set: Not Supported
00:26:25.298 Extended LBA Formats Supported: Not Supported
00:26:25.298 Flexible Data Placement Supported: Not Supported
00:26:25.298
00:26:25.298 Controller Memory Buffer Support
00:26:25.298 ================================
00:26:25.298 Supported: No
00:26:25.298
00:26:25.298 Persistent Memory Region Support
00:26:25.298 ================================
00:26:25.298 Supported: No
00:26:25.298
00:26:25.298 Admin Command Set Attributes
00:26:25.298 ============================
00:26:25.298 Security Send/Receive: Not Supported
00:26:25.298 Format NVM: Not Supported
00:26:25.298 Firmware Activate/Download: Not Supported
00:26:25.298 Namespace Management: Not Supported
00:26:25.298 Device Self-Test: Not Supported
00:26:25.298 Directives: Not Supported
00:26:25.298 NVMe-MI: Not Supported
00:26:25.298 Virtualization Management: Not Supported
00:26:25.298 Doorbell Buffer Config: Not Supported
00:26:25.298 Get LBA Status Capability: Not Supported
00:26:25.298 Command & Feature Lockdown Capability: Not Supported
00:26:25.298 Abort Command Limit: 1
00:26:25.298 Async Event Request Limit: 4
00:26:25.298 Number of Firmware Slots: N/A
00:26:25.298 Firmware Slot 1 Read-Only: N/A
00:26:25.298 Firmware Activation Without Reset: N/A
00:26:25.298 Multiple Update Detection Support: N/A
00:26:25.298 Firmware Update Granularity: No Information Provided
00:26:25.298 Per-Namespace SMART Log: No
00:26:25.298 Asymmetric Namespace Access Log Page: Not Supported
00:26:25.298 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:26:25.298 Command Effects Log Page: Not Supported
00:26:25.298 Get Log Page Extended Data: Supported
00:26:25.298 Telemetry Log Pages: Not Supported
00:26:25.298 Persistent Event Log Pages: Not Supported
00:26:25.298 Supported Log Pages Log Page: May Support
00:26:25.298 Commands Supported & Effects Log Page: Not Supported
00:26:25.298 Feature Identifiers & Effects Log Page:May Support
00:26:25.298 NVMe-MI Commands & Effects Log Page: May Support
00:26:25.298 Data Area 4 for Telemetry Log: Not Supported
00:26:25.298 Error Log Page Entries Supported: 128
00:26:25.298 Keep Alive: Not Supported
00:26:25.298
00:26:25.298 NVM Command Set Attributes
00:26:25.298 ==========================
00:26:25.298 Submission Queue Entry Size
00:26:25.298 Max: 1
00:26:25.298 Min: 1
00:26:25.298 Completion Queue Entry Size
00:26:25.298 Max: 1
00:26:25.298 Min: 1
00:26:25.298 Number of Namespaces: 0
00:26:25.298 Compare Command: Not Supported
00:26:25.298 Write Uncorrectable Command: Not Supported
00:26:25.298 Dataset Management Command: Not Supported
00:26:25.298 Write Zeroes Command: Not Supported
00:26:25.298 Set Features Save Field: Not Supported
00:26:25.298 Reservations: Not Supported
00:26:25.298 Timestamp: Not Supported
00:26:25.298 Copy: Not Supported
00:26:25.298 Volatile Write Cache: Not Present
00:26:25.298 Atomic Write Unit (Normal): 1
00:26:25.298 Atomic Write Unit (PFail): 1
00:26:25.298 Atomic Compare & Write Unit: 1
00:26:25.298 Fused Compare & Write: Supported
00:26:25.298 Scatter-Gather List
00:26:25.298 SGL Command Set: Supported
00:26:25.298 SGL Keyed: Supported
00:26:25.298 SGL Bit Bucket Descriptor: Not Supported
00:26:25.298 SGL Metadata Pointer: Not Supported
00:26:25.298 Oversized SGL: Not Supported
00:26:25.298 SGL Metadata Address: Not Supported
00:26:25.298 SGL Offset: Supported
00:26:25.298 Transport SGL Data Block: Not Supported
00:26:25.298 Replay Protected Memory Block: Not Supported
00:26:25.298
00:26:25.298 Firmware Slot Information
00:26:25.298 =========================
00:26:25.298 Active slot: 0
00:26:25.298
00:26:25.298
00:26:25.298 Error Log
00:26:25.298 =========
00:26:25.299
00:26:25.299 Active Namespaces
00:26:25.299 =================
00:26:25.299 Discovery Log Page
00:26:25.299 ==================
00:26:25.299 Generation Counter: 2
00:26:25.299 Number of Records: 2
00:26:25.299 Record Format: 0
00:26:25.299
00:26:25.299 Discovery Log Entry 0
00:26:25.299 ----------------------
00:26:25.299 Transport Type: 3 (TCP)
00:26:25.299 Address Family: 1 (IPv4)
00:26:25.299 Subsystem Type: 3 (Current Discovery Subsystem)
00:26:25.299 Entry Flags:
00:26:25.299 Duplicate Returned Information: 1
00:26:25.299 Explicit Persistent Connection Support for Discovery: 1
00:26:25.299 Transport Requirements:
00:26:25.299 Secure Channel: Not Required
00:26:25.299 Port ID: 0 (0x0000)
00:26:25.299 Controller ID: 65535 (0xffff)
00:26:25.299 Admin Max SQ Size: 128
00:26:25.299 Transport Service Identifier: 4420
00:26:25.299 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:26:25.299 Transport Address: 10.0.0.2
00:26:25.299 Discovery Log Entry 1
00:26:25.299 ----------------------
00:26:25.299 Transport Type: 3 (TCP)
00:26:25.299 Address Family: 1 (IPv4)
00:26:25.299 Subsystem Type: 2 (NVM Subsystem)
00:26:25.299 Entry Flags:
00:26:25.299 Duplicate Returned Information: 0
00:26:25.299 Explicit Persistent Connection Support for Discovery: 0
00:26:25.299 Transport Requirements:
00:26:25.299 Secure Channel: Not Required
00:26:25.299 Port ID: 0 (0x0000)
00:26:25.299 Controller ID: 65535 (0xffff)
00:26:25.299 Admin Max SQ Size: 128
00:26:25.299 Transport Service Identifier: 4420
00:26:25.299 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:26:25.299 Transport Address: 10.0.0.2
[2024-12-06 17:42:17.219974] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:26:25.299 [2024-12-06 17:42:17.219987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3100) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.219994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.299 [2024-12-06 17:42:17.220000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3280) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.220005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.299 [2024-12-06 17:42:17.220010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3400) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.220015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.299 [2024-12-06 17:42:17.220020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3580) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.220024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.299 [2024-12-06 17:42:17.220037] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x541690)
00:26:25.299 [2024-12-06 17:42:17.220053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.299 [2024-12-06 17:42:17.220069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3580, cid 3, qid 0
00:26:25.299 [2024-12-06 17:42:17.220347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:25.299 [2024-12-06 17:42:17.220354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:25.299 [2024-12-06 17:42:17.220360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3580) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.220371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x541690)
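The block above is spdk_nvme_identify printing the discovery controller's capabilities followed by its discovery log page: entry 0 describes the discovery subsystem itself (Subsystem Type 3) and entry 1 the NVM subsystem nqn.2016-06.io.spdk:cnode1 (Subsystem Type 2), both served over TCP/IPv4 at 10.0.0.2:4420. The chunked GET LOG PAGE (02) commands with cdw10 00ff0070, 02ff0070 and 00010070 are reads of log identifier 0x70 at increasing offsets (1024, 3072 and finally 8 bytes), the trailing 8-byte read re-checking the generation counter so a page that changed mid-transfer can be detected. A sketch of the same header fetch, assuming the public get-log-page helper and the layout from spdk/nvmf_spec.h (fetch_discovery_header and on_log_page are hypothetical names; the struct field names are as I understand that header, so treat them as assumptions):

#include <stdbool.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static void
on_log_page(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    /* Real code would check spdk_nvme_cpl_is_error(cpl) here. */
    *(bool *)ctx = true;
}

static int
fetch_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                       struct spdk_nvmf_discovery_log_page *hdr)
{
    bool done = false;
    int rc;

    /* Mirrors the first chunked read above: log page 0x70, offset 0. */
    rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                          0 /* nsid */, hdr, sizeof(*hdr),
                                          0 /* offset */, on_log_page, &done);
    if (rc != 0) {
        return rc;
    }
    while (!done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    /* hdr->numrec 1024-byte entries follow at increasing offsets; re-reading
     * just the first 8 bytes (hdr->genctr) afterwards corresponds to the
     * GET LOG PAGE ... cdw10:00010070 seen in this log. */
    return 0;
}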
00:26:25.299 [2024-12-06 17:42:17.220385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.299 [2024-12-06 17:42:17.220400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3580, cid 3, qid 0
00:26:25.299 [2024-12-06 17:42:17.220651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:25.299 [2024-12-06 17:42:17.220658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:25.299 [2024-12-06 17:42:17.220661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3580) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.220671] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:26:25.299 [2024-12-06 17:42:17.220676] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:26:25.299 [2024-12-06 17:42:17.220686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x541690)
00:26:25.299 [2024-12-06 17:42:17.220700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.299 [2024-12-06 17:42:17.220711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3580, cid 3, qid 0
00:26:25.299 [2024-12-06 17:42:17.220879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:25.299 [2024-12-06 17:42:17.220885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:25.299 [2024-12-06 17:42:17.220889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3580) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.220903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.220911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x541690)
00:26:25.299 [2024-12-06 17:42:17.220918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.299 [2024-12-06 17:42:17.220928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3580, cid 3, qid 0
00:26:25.299 [2024-12-06 17:42:17.221151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:25.299 [2024-12-06 17:42:17.221157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:25.299 [2024-12-06 17:42:17.221161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.221165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3580) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.221175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.221178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.221182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x541690)
00:26:25.299 [2024-12-06 17:42:17.221189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.299 [2024-12-06 17:42:17.221199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3580, cid 3, qid 0
00:26:25.299 [2024-12-06 17:42:17.221405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:25.299 [2024-12-06 17:42:17.221412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:25.299 [2024-12-06 17:42:17.221415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.221419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3580) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.221429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.221433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.221436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x541690)
00:26:25.299 [2024-12-06 17:42:17.221443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.299 [2024-12-06 17:42:17.221454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3580, cid 3, qid 0
00:26:25.299 [2024-12-06 17:42:17.225649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:25.299 [2024-12-06 17:42:17.225657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:25.299 [2024-12-06 17:42:17.225661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.225665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3580) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.225675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.225679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.225683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x541690)
00:26:25.299 [2024-12-06 17:42:17.225690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.299 [2024-12-06 17:42:17.225702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a3580, cid 3, qid 0
00:26:25.299 [2024-12-06 17:42:17.225889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:25.299 [2024-12-06 17:42:17.225895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:25.299 [2024-12-06 17:42:17.225899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:25.299 [2024-12-06 17:42:17.225902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a3580) on tqpair=0x541690
00:26:25.299 [2024-12-06 17:42:17.225910] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds
00:26:25.299
00:26:25.299 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
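With the discovery controller shut down ("shutdown complete in 5 milliseconds"), the harness re-runs spdk_nvme_identify directly against the NVM subsystem that Discovery Log Entry 1 advertised, and the entries below trace a fresh controller bring-up for nqn.2016-06.io.spdk:cnode1: socket connect, ICReq/ICResp exchange, FABRIC CONNECT, then the read vs / read cap / enable / identify state machine. A sketch of the equivalent programmatic connect, assuming the standard transport-ID helpers (connect_cnode1 is a hypothetical name; the tool itself parses the same descriptor from its -r argument):

#include <stddef.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_cnode1(void)
{
    struct spdk_nvme_transport_id trid = {0};

    /* Same descriptor string the -r option above carries. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return NULL;
    }

    /* Synchronous connect: internally walks the connect adminq -> read vs ->
     * read cap -> enable -> identify states logged below. */
    return spdk_nvme_connect(&trid, NULL, 0);
}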
00:26:25.299 [2024-12-06 17:42:17.276479] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:26:25.300 [2024-12-06 17:42:17.276529] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698725 ]
00:26:25.300 [2024-12-06 17:42:17.333154] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:26:25.300 [2024-12-06 17:42:17.333217] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:26:25.300 [2024-12-06 17:42:17.333223] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:26:25.300 [2024-12-06 17:42:17.333240] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:26:25.300 [2024-12-06 17:42:17.333250] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:26:25.300 [2024-12-06 17:42:17.333930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:26:25.300 [2024-12-06 17:42:17.333974] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xda5690 0
00:26:25.300 [2024-12-06 17:42:17.339653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:26:25.300 [2024-12-06 17:42:17.339669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:26:25.300 [2024-12-06 17:42:17.339673] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:26:25.300 [2024-12-06 17:42:17.339677] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:26:25.300 [2024-12-06 17:42:17.339715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:25.300 [2024-12-06 17:42:17.339722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:25.300 [2024-12-06 17:42:17.339726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda5690)
00:26:25.300 [2024-12-06 17:42:17.339739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:26:25.300 [2024-12-06 17:42:17.339764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07100, cid 0, qid 0
00:26:25.300 [2024-12-06 17:42:17.347650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:25.300 [2024-12-06 17:42:17.347659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:25.300 [2024-12-06 17:42:17.347663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:25.300 [2024-12-06 17:42:17.347668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07100) on tqpair=0xda5690
00:26:25.300 [2024-12-06 17:42:17.347678] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:26:25.300 [2024-12-06 17:42:17.347686] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:26:25.300 [2024-12-06 17:42:17.347691] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:26:25.300 [2024-12-06 17:42:17.347706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:25.300 [2024-12-06 17:42:17.347710] nvme_tcp.c:
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.347714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda5690) 00:26:25.300 [2024-12-06 17:42:17.347722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.300 [2024-12-06 17:42:17.347736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07100, cid 0, qid 0 00:26:25.300 [2024-12-06 17:42:17.347921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.300 [2024-12-06 17:42:17.347928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.300 [2024-12-06 17:42:17.347931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.347935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07100) on tqpair=0xda5690 00:26:25.300 [2024-12-06 17:42:17.347941] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:26:25.300 [2024-12-06 17:42:17.347949] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:26:25.300 [2024-12-06 17:42:17.347956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.347959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.347963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda5690) 00:26:25.300 [2024-12-06 17:42:17.347970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.300 [2024-12-06 17:42:17.347981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07100, cid 0, qid 0 00:26:25.300 [2024-12-06 17:42:17.348189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.300 [2024-12-06 17:42:17.348195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.300 [2024-12-06 17:42:17.348204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.348208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07100) on tqpair=0xda5690 00:26:25.300 [2024-12-06 17:42:17.348213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:26:25.300 [2024-12-06 17:42:17.348222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:25.300 [2024-12-06 17:42:17.348228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.348232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.348236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda5690) 00:26:25.300 [2024-12-06 17:42:17.348242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.300 [2024-12-06 17:42:17.348253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07100, cid 0, qid 0 00:26:25.300 [2024-12-06 17:42:17.348503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.300 [2024-12-06 17:42:17.348509] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.300 [2024-12-06 17:42:17.348512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.348516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07100) on tqpair=0xda5690 00:26:25.300 [2024-12-06 17:42:17.348521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:25.300 [2024-12-06 17:42:17.348532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.348535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.348539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda5690) 00:26:25.300 [2024-12-06 17:42:17.348546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.300 [2024-12-06 17:42:17.348556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07100, cid 0, qid 0 00:26:25.300 [2024-12-06 17:42:17.348803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.300 [2024-12-06 17:42:17.348810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.300 [2024-12-06 17:42:17.348814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.348817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07100) on tqpair=0xda5690 00:26:25.300 [2024-12-06 17:42:17.348822] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:25.300 [2024-12-06 17:42:17.348828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:25.300 [2024-12-06 17:42:17.348836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:25.300 [2024-12-06 17:42:17.348945] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:26:25.300 [2024-12-06 17:42:17.348950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:25.300 [2024-12-06 17:42:17.348959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.348963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.348966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda5690) 00:26:25.300 [2024-12-06 17:42:17.348973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.300 [2024-12-06 17:42:17.348985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07100, cid 0, qid 0 00:26:25.300 [2024-12-06 17:42:17.349168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.300 [2024-12-06 17:42:17.349175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.300 [2024-12-06 17:42:17.349178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.349182] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07100) on tqpair=0xda5690 00:26:25.300 [2024-12-06 17:42:17.349187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:25.300 [2024-12-06 17:42:17.349197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.349201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.349204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda5690) 00:26:25.300 [2024-12-06 17:42:17.349211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.300 [2024-12-06 17:42:17.349222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07100, cid 0, qid 0 00:26:25.300 [2024-12-06 17:42:17.349388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.300 [2024-12-06 17:42:17.349395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.300 [2024-12-06 17:42:17.349398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.349402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07100) on tqpair=0xda5690 00:26:25.300 [2024-12-06 17:42:17.349407] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:25.300 [2024-12-06 17:42:17.349412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:25.300 [2024-12-06 17:42:17.349419] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:26:25.300 [2024-12-06 17:42:17.349431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:25.300 [2024-12-06 17:42:17.349440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.300 [2024-12-06 17:42:17.349444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda5690) 00:26:25.300 [2024-12-06 17:42:17.349451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.300 [2024-12-06 17:42:17.349461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07100, cid 0, qid 0 00:26:25.300 [2024-12-06 17:42:17.349728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.300 [2024-12-06 17:42:17.349735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.301 [2024-12-06 17:42:17.349739] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.301 [2024-12-06 17:42:17.349743] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda5690): datao=0, datal=4096, cccid=0 00:26:25.301 [2024-12-06 17:42:17.349748] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe07100) on tqpair(0xda5690): expected_datao=0, payload_size=4096 00:26:25.301 [2024-12-06 17:42:17.349752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.301 [2024-12-06 17:42:17.349765] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:26:25.301 [2024-12-06 17:42:17.349770] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.393652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.566 [2024-12-06 17:42:17.393665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.566 [2024-12-06 17:42:17.393669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.393673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07100) on tqpair=0xda5690 00:26:25.566 [2024-12-06 17:42:17.393686] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:26:25.566 [2024-12-06 17:42:17.393695] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:26:25.566 [2024-12-06 17:42:17.393700] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:26:25.566 [2024-12-06 17:42:17.393704] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:26:25.566 [2024-12-06 17:42:17.393709] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:26:25.566 [2024-12-06 17:42:17.393714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.393724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.393731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.393735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.393739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda5690) 00:26:25.566 [2024-12-06 17:42:17.393747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:25.566 [2024-12-06 17:42:17.393761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07100, cid 0, qid 0 00:26:25.566 [2024-12-06 17:42:17.393934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.566 [2024-12-06 17:42:17.393941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.566 [2024-12-06 17:42:17.393944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.393948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07100) on tqpair=0xda5690 00:26:25.566 [2024-12-06 17:42:17.393955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.393959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.393963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xda5690) 00:26:25.566 [2024-12-06 17:42:17.393969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.566 [2024-12-06 17:42:17.393975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.393979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.393982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xda5690) 00:26:25.566 [2024-12-06 17:42:17.393988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.566 [2024-12-06 17:42:17.393994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.393998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xda5690) 00:26:25.566 [2024-12-06 17:42:17.394007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.566 [2024-12-06 17:42:17.394013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.566 [2024-12-06 17:42:17.394026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.566 [2024-12-06 17:42:17.394031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.394045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.394052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda5690) 00:26:25.566 [2024-12-06 17:42:17.394062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.566 [2024-12-06 17:42:17.394074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07100, cid 0, qid 0 00:26:25.566 [2024-12-06 17:42:17.394080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07280, cid 1, qid 0 00:26:25.566 [2024-12-06 17:42:17.394085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07400, cid 2, qid 0 00:26:25.566 [2024-12-06 17:42:17.394089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.566 [2024-12-06 17:42:17.394094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07700, cid 4, qid 0 00:26:25.566 [2024-12-06 17:42:17.394343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.566 [2024-12-06 17:42:17.394350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.566 [2024-12-06 17:42:17.394353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07700) on tqpair=0xda5690 00:26:25.566 [2024-12-06 17:42:17.394363] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:26:25.566 [2024-12-06 17:42:17.394368] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.394377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.394384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.394390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda5690) 00:26:25.566 [2024-12-06 17:42:17.394404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:25.566 [2024-12-06 17:42:17.394415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07700, cid 4, qid 0 00:26:25.566 [2024-12-06 17:42:17.394647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.566 [2024-12-06 17:42:17.394654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.566 [2024-12-06 17:42:17.394657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07700) on tqpair=0xda5690 00:26:25.566 [2024-12-06 17:42:17.394730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.394740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.394748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda5690) 00:26:25.566 [2024-12-06 17:42:17.394759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.566 [2024-12-06 17:42:17.394770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07700, cid 4, qid 0 00:26:25.566 [2024-12-06 17:42:17.394959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.566 [2024-12-06 17:42:17.394966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.566 [2024-12-06 17:42:17.394970] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394974] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda5690): datao=0, datal=4096, cccid=4 00:26:25.566 [2024-12-06 17:42:17.394979] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe07700) on tqpair(0xda5690): expected_datao=0, payload_size=4096 00:26:25.566 [2024-12-06 17:42:17.394983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394990] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.394994] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:26:25.566 [2024-12-06 17:42:17.395160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.566 [2024-12-06 17:42:17.395166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.566 [2024-12-06 17:42:17.395170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.395173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07700) on tqpair=0xda5690 00:26:25.566 [2024-12-06 17:42:17.395191] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:26:25.566 [2024-12-06 17:42:17.395203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.395213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:26:25.566 [2024-12-06 17:42:17.395220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.566 [2024-12-06 17:42:17.395224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.395230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.567 [2024-12-06 17:42:17.395241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07700, cid 4, qid 0 00:26:25.567 [2024-12-06 17:42:17.395494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.567 [2024-12-06 17:42:17.395501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.567 [2024-12-06 17:42:17.395504] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.395508] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda5690): datao=0, datal=4096, cccid=4 00:26:25.567 [2024-12-06 17:42:17.395512] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe07700) on tqpair(0xda5690): expected_datao=0, payload_size=4096 00:26:25.567 [2024-12-06 17:42:17.395517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.395523] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.395527] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.395663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.567 [2024-12-06 17:42:17.395670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.567 [2024-12-06 17:42:17.395673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.395677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07700) on tqpair=0xda5690 00:26:25.567 [2024-12-06 17:42:17.395692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:25.567 [2024-12-06 17:42:17.395702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:25.567 [2024-12-06 17:42:17.395710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.395713] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.395722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.567 [2024-12-06 17:42:17.395733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07700, cid 4, qid 0 00:26:25.567 [2024-12-06 17:42:17.395974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.567 [2024-12-06 17:42:17.395981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.567 [2024-12-06 17:42:17.395984] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.395988] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda5690): datao=0, datal=4096, cccid=4 00:26:25.567 [2024-12-06 17:42:17.395992] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe07700) on tqpair(0xda5690): expected_datao=0, payload_size=4096 00:26:25.567 [2024-12-06 17:42:17.395997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396003] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396007] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.567 [2024-12-06 17:42:17.396209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.567 [2024-12-06 17:42:17.396212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07700) on tqpair=0xda5690 00:26:25.567 [2024-12-06 17:42:17.396224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:25.567 [2024-12-06 17:42:17.396233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:26:25.567 [2024-12-06 17:42:17.396242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:26:25.567 [2024-12-06 17:42:17.396251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:25.567 [2024-12-06 17:42:17.396257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:25.567 [2024-12-06 17:42:17.396262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:26:25.567 [2024-12-06 17:42:17.396268] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:26:25.567 [2024-12-06 17:42:17.396273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:26:25.567 [2024-12-06 17:42:17.396279] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:26:25.567 [2024-12-06 17:42:17.396296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.396306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.567 [2024-12-06 17:42:17.396313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.396327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.567 [2024-12-06 17:42:17.396340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07700, cid 4, qid 0 00:26:25.567 [2024-12-06 17:42:17.396347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07880, cid 5, qid 0 00:26:25.567 [2024-12-06 17:42:17.396543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.567 [2024-12-06 17:42:17.396550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.567 [2024-12-06 17:42:17.396553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07700) on tqpair=0xda5690 00:26:25.567 [2024-12-06 17:42:17.396564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.567 [2024-12-06 17:42:17.396570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.567 [2024-12-06 17:42:17.396573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07880) on tqpair=0xda5690 00:26:25.567 [2024-12-06 17:42:17.396586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.396597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.567 [2024-12-06 17:42:17.396607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07880, cid 5, qid 0 00:26:25.567 [2024-12-06 17:42:17.396821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.567 [2024-12-06 17:42:17.396828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.567 [2024-12-06 17:42:17.396831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07880) on tqpair=0xda5690 00:26:25.567 [2024-12-06 17:42:17.396844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.396848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.396854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.567 [2024-12-06 17:42:17.396865] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07880, cid 5, qid 0 00:26:25.567 [2024-12-06 17:42:17.397074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.567 [2024-12-06 17:42:17.397081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.567 [2024-12-06 17:42:17.397084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.397088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07880) on tqpair=0xda5690 00:26:25.567 [2024-12-06 17:42:17.397097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.397101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.397107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.567 [2024-12-06 17:42:17.397117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07880, cid 5, qid 0 00:26:25.567 [2024-12-06 17:42:17.397300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.567 [2024-12-06 17:42:17.397306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.567 [2024-12-06 17:42:17.397310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.397314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07880) on tqpair=0xda5690 00:26:25.567 [2024-12-06 17:42:17.397331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.397335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.397342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.567 [2024-12-06 17:42:17.397351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.397355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.397361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.567 [2024-12-06 17:42:17.397369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.397373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.397379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.567 [2024-12-06 17:42:17.397386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.567 [2024-12-06 17:42:17.397390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xda5690) 00:26:25.567 [2024-12-06 17:42:17.397396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.567 [2024-12-06 17:42:17.397408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07880, cid 5, qid 0 00:26:25.567 
[2024-12-06 17:42:17.397413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07700, cid 4, qid 0 00:26:25.567 [2024-12-06 17:42:17.397418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07a00, cid 6, qid 0 00:26:25.567 [2024-12-06 17:42:17.397423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07b80, cid 7, qid 0 00:26:25.567 [2024-12-06 17:42:17.397726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.567 [2024-12-06 17:42:17.397733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.568 [2024-12-06 17:42:17.397736] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397740] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda5690): datao=0, datal=8192, cccid=5 00:26:25.568 [2024-12-06 17:42:17.397744] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe07880) on tqpair(0xda5690): expected_datao=0, payload_size=8192 00:26:25.568 [2024-12-06 17:42:17.397749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397858] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397863] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.568 [2024-12-06 17:42:17.397875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.568 [2024-12-06 17:42:17.397878] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397882] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda5690): datao=0, datal=512, cccid=4 00:26:25.568 [2024-12-06 17:42:17.397886] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe07700) on tqpair(0xda5690): expected_datao=0, payload_size=512 00:26:25.568 [2024-12-06 17:42:17.397890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397897] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397900] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.568 [2024-12-06 17:42:17.397912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.568 [2024-12-06 17:42:17.397915] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397919] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda5690): datao=0, datal=512, cccid=6 00:26:25.568 [2024-12-06 17:42:17.397923] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe07a00) on tqpair(0xda5690): expected_datao=0, payload_size=512 00:26:25.568 [2024-12-06 17:42:17.397929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397936] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397939] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.568 [2024-12-06 17:42:17.397951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.568 [2024-12-06 17:42:17.397954] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397957] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xda5690): datao=0, datal=4096, cccid=7 00:26:25.568 [2024-12-06 17:42:17.397962] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe07b80) on tqpair(0xda5690): expected_datao=0, payload_size=4096 00:26:25.568 [2024-12-06 17:42:17.397966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397973] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397976] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.397992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.568 [2024-12-06 17:42:17.397997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.568 [2024-12-06 17:42:17.398001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.398005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07880) on tqpair=0xda5690 00:26:25.568 [2024-12-06 17:42:17.398017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.568 [2024-12-06 17:42:17.398023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.568 [2024-12-06 17:42:17.398026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.398030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07700) on tqpair=0xda5690 00:26:25.568 [2024-12-06 17:42:17.398040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.568 [2024-12-06 17:42:17.398046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.568 [2024-12-06 17:42:17.398050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.398054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07a00) on tqpair=0xda5690 00:26:25.568 [2024-12-06 17:42:17.398061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.568 [2024-12-06 17:42:17.398067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.568 [2024-12-06 17:42:17.398070] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.568 [2024-12-06 17:42:17.398074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07b80) on tqpair=0xda5690 00:26:25.568 ===================================================== 00:26:25.568 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:25.568 ===================================================== 00:26:25.568 Controller Capabilities/Features 00:26:25.568 ================================ 00:26:25.568 Vendor ID: 8086 00:26:25.568 Subsystem Vendor ID: 8086 00:26:25.568 Serial Number: SPDK00000000000001 00:26:25.568 Model Number: SPDK bdev Controller 00:26:25.568 Firmware Version: 25.01 00:26:25.568 Recommended Arb Burst: 6 00:26:25.568 IEEE OUI Identifier: e4 d2 5c 00:26:25.568 Multi-path I/O 00:26:25.568 May have multiple subsystem ports: Yes 00:26:25.568 May have multiple controllers: Yes 00:26:25.568 Associated with SR-IOV VF: No 00:26:25.568 Max Data Transfer Size: 131072 00:26:25.568 Max Number of Namespaces: 32 00:26:25.568 Max Number of I/O Queues: 127 00:26:25.568 NVMe Specification Version (VS): 1.3 00:26:25.568 NVMe Specification Version (Identify): 1.3 00:26:25.568 
Maximum Queue Entries: 128 00:26:25.568 Contiguous Queues Required: Yes 00:26:25.568 Arbitration Mechanisms Supported 00:26:25.568 Weighted Round Robin: Not Supported 00:26:25.568 Vendor Specific: Not Supported 00:26:25.568 Reset Timeout: 15000 ms 00:26:25.568 Doorbell Stride: 4 bytes 00:26:25.568 NVM Subsystem Reset: Not Supported 00:26:25.568 Command Sets Supported 00:26:25.568 NVM Command Set: Supported 00:26:25.568 Boot Partition: Not Supported 00:26:25.568 Memory Page Size Minimum: 4096 bytes 00:26:25.568 Memory Page Size Maximum: 4096 bytes 00:26:25.568 Persistent Memory Region: Not Supported 00:26:25.568 Optional Asynchronous Events Supported 00:26:25.568 Namespace Attribute Notices: Supported 00:26:25.568 Firmware Activation Notices: Not Supported 00:26:25.568 ANA Change Notices: Not Supported 00:26:25.568 PLE Aggregate Log Change Notices: Not Supported 00:26:25.568 LBA Status Info Alert Notices: Not Supported 00:26:25.568 EGE Aggregate Log Change Notices: Not Supported 00:26:25.568 Normal NVM Subsystem Shutdown event: Not Supported 00:26:25.568 Zone Descriptor Change Notices: Not Supported 00:26:25.568 Discovery Log Change Notices: Not Supported 00:26:25.568 Controller Attributes 00:26:25.568 128-bit Host Identifier: Supported 00:26:25.568 Non-Operational Permissive Mode: Not Supported 00:26:25.568 NVM Sets: Not Supported 00:26:25.568 Read Recovery Levels: Not Supported 00:26:25.568 Endurance Groups: Not Supported 00:26:25.568 Predictable Latency Mode: Not Supported 00:26:25.568 Traffic Based Keep Alive: Not Supported 00:26:25.568 Namespace Granularity: Not Supported 00:26:25.568 SQ Associations: Not Supported 00:26:25.568 UUID List: Not Supported 00:26:25.568 Multi-Domain Subsystem: Not Supported 00:26:25.568 Fixed Capacity Management: Not Supported 00:26:25.568 Variable Capacity Management: Not Supported 00:26:25.568 Delete Endurance Group: Not Supported 00:26:25.568 Delete NVM Set: Not Supported 00:26:25.568 Extended LBA Formats Supported: Not Supported 00:26:25.568 Flexible Data Placement Supported: Not Supported 00:26:25.568 00:26:25.568 Controller Memory Buffer Support 00:26:25.568 ================================ 00:26:25.568 Supported: No 00:26:25.568 00:26:25.568 Persistent Memory Region Support 00:26:25.568 ================================ 00:26:25.568 Supported: No 00:26:25.568 00:26:25.568 Admin Command Set Attributes 00:26:25.568 ============================ 00:26:25.568 Security Send/Receive: Not Supported 00:26:25.568 Format NVM: Not Supported 00:26:25.568 Firmware Activate/Download: Not Supported 00:26:25.568 Namespace Management: Not Supported 00:26:25.568 Device Self-Test: Not Supported 00:26:25.568 Directives: Not Supported 00:26:25.568 NVMe-MI: Not Supported 00:26:25.568 Virtualization Management: Not Supported 00:26:25.568 Doorbell Buffer Config: Not Supported 00:26:25.568 Get LBA Status Capability: Not Supported 00:26:25.568 Command & Feature Lockdown Capability: Not Supported 00:26:25.568 Abort Command Limit: 4 00:26:25.568 Async Event Request Limit: 4 00:26:25.568 Number of Firmware Slots: N/A 00:26:25.568 Firmware Slot 1 Read-Only: N/A 00:26:25.568 Firmware Activation Without Reset: N/A 00:26:25.568 Multiple Update Detection Support: N/A 00:26:25.568 Firmware Update Granularity: No Information Provided 00:26:25.568 Per-Namespace SMART Log: No 00:26:25.568 Asymmetric Namespace Access Log Page: Not Supported 00:26:25.568 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:25.568 Command Effects Log Page: Supported 00:26:25.568 Get Log Page Extended Data: 
Supported 00:26:25.568 Telemetry Log Pages: Not Supported 00:26:25.568 Persistent Event Log Pages: Not Supported 00:26:25.568 Supported Log Pages Log Page: May Support 00:26:25.568 Commands Supported & Effects Log Page: Not Supported 00:26:25.568 Feature Identifiers & Effects Log Page: May Support 00:26:25.568 NVMe-MI Commands & Effects Log Page: May Support 00:26:25.568 Data Area 4 for Telemetry Log: Not Supported 00:26:25.568 Error Log Page Entries Supported: 128 00:26:25.568 Keep Alive: Supported 00:26:25.568 Keep Alive Granularity: 10000 ms 00:26:25.568 00:26:25.568 NVM Command Set Attributes 00:26:25.568 ========================== 00:26:25.569 Submission Queue Entry Size 00:26:25.569 Max: 64 00:26:25.569 Min: 64 00:26:25.569 Completion Queue Entry Size 00:26:25.569 Max: 16 00:26:25.569 Min: 16 00:26:25.569 Number of Namespaces: 32 00:26:25.569 Compare Command: Supported 00:26:25.569 Write Uncorrectable Command: Not Supported 00:26:25.569 Dataset Management Command: Supported 00:26:25.569 Write Zeroes Command: Supported 00:26:25.569 Set Features Save Field: Not Supported 00:26:25.569 Reservations: Supported 00:26:25.569 Timestamp: Not Supported 00:26:25.569 Copy: Supported 00:26:25.569 Volatile Write Cache: Present 00:26:25.569 Atomic Write Unit (Normal): 1 00:26:25.569 Atomic Write Unit (PFail): 1 00:26:25.569 Atomic Compare & Write Unit: 1 00:26:25.569 Fused Compare & Write: Supported 00:26:25.569 Scatter-Gather List 00:26:25.569 SGL Command Set: Supported 00:26:25.569 SGL Keyed: Supported 00:26:25.569 SGL Bit Bucket Descriptor: Not Supported 00:26:25.569 SGL Metadata Pointer: Not Supported 00:26:25.569 Oversized SGL: Not Supported 00:26:25.569 SGL Metadata Address: Not Supported 00:26:25.569 SGL Offset: Supported 00:26:25.569 Transport SGL Data Block: Not Supported 00:26:25.569 Replay Protected Memory Block: Not Supported 00:26:25.569 00:26:25.569 Firmware Slot Information 00:26:25.569 ========================= 00:26:25.569 Active slot: 1 00:26:25.569 Slot 1 Firmware Revision: 25.01 00:26:25.569 00:26:25.569 00:26:25.569 Commands Supported and Effects 00:26:25.569 ============================== 00:26:25.569 Admin Commands 00:26:25.569 -------------- 00:26:25.569 Get Log Page (02h): Supported 00:26:25.569 Identify (06h): Supported 00:26:25.569 Abort (08h): Supported 00:26:25.569 Set Features (09h): Supported 00:26:25.569 Get Features (0Ah): Supported 00:26:25.569 Asynchronous Event Request (0Ch): Supported 00:26:25.569 Keep Alive (18h): Supported 00:26:25.569 I/O Commands 00:26:25.569 ------------ 00:26:25.569 Flush (00h): Supported LBA-Change 00:26:25.569 Write (01h): Supported LBA-Change 00:26:25.569 Read (02h): Supported 00:26:25.569 Compare (05h): Supported 00:26:25.569 Write Zeroes (08h): Supported LBA-Change 00:26:25.569 Dataset Management (09h): Supported LBA-Change 00:26:25.569 Copy (19h): Supported LBA-Change 00:26:25.569 00:26:25.569 Error Log 00:26:25.569 ========= 00:26:25.569 00:26:25.569 Arbitration 00:26:25.569 =========== 00:26:25.569 Arbitration Burst: 1 00:26:25.569 00:26:25.569 Power Management 00:26:25.569 ================ 00:26:25.569 Number of Power States: 1 00:26:25.569 Current Power State: Power State #0 00:26:25.569 Power State #0: 00:26:25.569 Max Power: 0.00 W 00:26:25.569 Non-Operational State: Operational 00:26:25.569 Entry Latency: Not Reported 00:26:25.569 Exit Latency: Not Reported 00:26:25.569 Relative Read Throughput: 0 00:26:25.569 Relative Read Latency: 0 00:26:25.569 Relative Write Throughput: 0 00:26:25.569 Relative Write Latency: 0 00:26:25.569 
Idle Power: Not Reported 00:26:25.569 Active Power: Not Reported 00:26:25.569 Non-Operational Permissive Mode: Not Supported 00:26:25.569 00:26:25.569 Health Information 00:26:25.569 ================== 00:26:25.569 Critical Warnings: 00:26:25.569 Available Spare Space: OK 00:26:25.569 Temperature: OK 00:26:25.569 Device Reliability: OK 00:26:25.569 Read Only: No 00:26:25.569 Volatile Memory Backup: OK 00:26:25.569 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:25.569 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:25.569 Available Spare: 0% 00:26:25.569 Available Spare Threshold: 0% 00:26:25.569 Life Percentage Used:[2024-12-06 17:42:17.398175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.398180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xda5690) 00:26:25.569 [2024-12-06 17:42:17.398187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.569 [2024-12-06 17:42:17.398198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07b80, cid 7, qid 0 00:26:25.569 [2024-12-06 17:42:17.398422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.569 [2024-12-06 17:42:17.398429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.569 [2024-12-06 17:42:17.398432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.398436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07b80) on tqpair=0xda5690 00:26:25.569 [2024-12-06 17:42:17.398474] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:26:25.569 [2024-12-06 17:42:17.398484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07100) on tqpair=0xda5690 00:26:25.569 [2024-12-06 17:42:17.398491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.569 [2024-12-06 17:42:17.398499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07280) on tqpair=0xda5690 00:26:25.569 [2024-12-06 17:42:17.398503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.569 [2024-12-06 17:42:17.398508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07400) on tqpair=0xda5690 00:26:25.569 [2024-12-06 17:42:17.398513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.569 [2024-12-06 17:42:17.398518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.569 [2024-12-06 17:42:17.398523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.569 [2024-12-06 17:42:17.398531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.398535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.398539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.569 [2024-12-06 17:42:17.398545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
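The identify dump above is produced by SPDK's identify example, which host/identify.sh aims at the TCP listener; the surrounding *DEBUG* lines are the host-side state machine issuing IDENTIFY and GET/SET FEATURES capsules on the admin queue before the controller reaches the ready state. A minimal sketch of reproducing the dump by hand, assuming an SPDK tree with the examples built under build/examples and the target from this run still listening on 10.0.0.2:4420 (the transport string and subnqn mirror what host/identify.sh passes; the -L log flag is an assumption and only takes effect on debug builds):

./build/examples/identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L nvme    # assumed flag: re-enables the host-side *DEBUG* nvme traces seen in this log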
00:26:25.569 [2024-12-06 17:42:17.398557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.569 [2024-12-06 17:42:17.402649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.569 [2024-12-06 17:42:17.402657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.569 [2024-12-06 17:42:17.402661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.402665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.569 [2024-12-06 17:42:17.402672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.402676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.402680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.569 [2024-12-06 17:42:17.402686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.569 [2024-12-06 17:42:17.402703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.569 [2024-12-06 17:42:17.402913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.569 [2024-12-06 17:42:17.402919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.569 [2024-12-06 17:42:17.402923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.402927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.569 [2024-12-06 17:42:17.402932] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:26:25.569 [2024-12-06 17:42:17.402937] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:26:25.569 [2024-12-06 17:42:17.402946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.402950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.402954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.569 [2024-12-06 17:42:17.402961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.569 [2024-12-06 17:42:17.402971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.569 [2024-12-06 17:42:17.403192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.569 [2024-12-06 17:42:17.403198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.569 [2024-12-06 17:42:17.403201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.403205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.569 [2024-12-06 17:42:17.403218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.403223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.403226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.569 [2024-12-06 17:42:17.403233] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.569 [2024-12-06 17:42:17.403243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.569 [2024-12-06 17:42:17.403436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.569 [2024-12-06 17:42:17.403443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.569 [2024-12-06 17:42:17.403446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.403450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.569 [2024-12-06 17:42:17.403460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.403464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.569 [2024-12-06 17:42:17.403468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.569 [2024-12-06 17:42:17.403474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.569 [2024-12-06 17:42:17.403484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.403728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.403735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.403738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.403742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.570 [2024-12-06 17:42:17.403752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.403756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.403760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.570 [2024-12-06 17:42:17.403767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.570 [2024-12-06 17:42:17.403777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.403950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.403956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.403960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.403964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.570 [2024-12-06 17:42:17.403973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.403977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.403981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.570 [2024-12-06 17:42:17.403988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.570 [2024-12-06 17:42:17.403998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.404209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.404216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.404219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.570 [2024-12-06 17:42:17.404234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.570 [2024-12-06 17:42:17.404251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.570 [2024-12-06 17:42:17.404262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.404451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.404457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.404461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.570 [2024-12-06 17:42:17.404474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.570 [2024-12-06 17:42:17.404488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.570 [2024-12-06 17:42:17.404498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.404696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.404702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.404706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.570 [2024-12-06 17:42:17.404719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.570 [2024-12-06 17:42:17.404733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.570 [2024-12-06 17:42:17.404744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.404918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.404924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.404928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.570 [2024-12-06 17:42:17.404941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.404949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.570 [2024-12-06 17:42:17.404955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.570 [2024-12-06 17:42:17.404965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.405186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.405192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.405196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.570 [2024-12-06 17:42:17.405210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.570 [2024-12-06 17:42:17.405227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.570 [2024-12-06 17:42:17.405237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.405420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.405427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.405430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.570 [2024-12-06 17:42:17.405444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.570 [2024-12-06 17:42:17.405458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.570 [2024-12-06 17:42:17.405468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.405658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.405664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.405668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405672] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.570 [2024-12-06 17:42:17.405681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.570 [2024-12-06 17:42:17.405695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.570 [2024-12-06 17:42:17.405706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.405903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.405909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.405913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.570 [2024-12-06 17:42:17.405927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.570 [2024-12-06 17:42:17.405934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.570 [2024-12-06 17:42:17.405941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.570 [2024-12-06 17:42:17.405951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.570 [2024-12-06 17:42:17.406167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.570 [2024-12-06 17:42:17.406174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.570 [2024-12-06 17:42:17.406177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.571 [2024-12-06 17:42:17.406181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.571 [2024-12-06 17:42:17.406192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.571 [2024-12-06 17:42:17.406196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.571 [2024-12-06 17:42:17.406200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.571 [2024-12-06 17:42:17.406207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.571 [2024-12-06 17:42:17.406219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.571 [2024-12-06 17:42:17.406411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.571 [2024-12-06 17:42:17.406418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.571 [2024-12-06 17:42:17.406421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.571 [2024-12-06 17:42:17.406425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.571 [2024-12-06 17:42:17.406435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.571 [2024-12-06 17:42:17.406439] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.571 [2024-12-06 17:42:17.406442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.571 [2024-12-06 17:42:17.406449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.571 [2024-12-06 17:42:17.406459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.571 [2024-12-06 17:42:17.410648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.571 [2024-12-06 17:42:17.410656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.571 [2024-12-06 17:42:17.410660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.571 [2024-12-06 17:42:17.410664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.571 [2024-12-06 17:42:17.410674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.571 [2024-12-06 17:42:17.410678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.571 [2024-12-06 17:42:17.410681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xda5690) 00:26:25.571 [2024-12-06 17:42:17.410688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.571 [2024-12-06 17:42:17.410700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe07580, cid 3, qid 0 00:26:25.571 [2024-12-06 17:42:17.410887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.571 [2024-12-06 17:42:17.410894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.571 [2024-12-06 17:42:17.410897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.571 [2024-12-06 17:42:17.410901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe07580) on tqpair=0xda5690 00:26:25.571 [2024-12-06 17:42:17.410909] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:26:25.571 0% 00:26:25.571 Data Units Read: 0 00:26:25.571 Data Units Written: 0 00:26:25.571 Host Read Commands: 0 00:26:25.571 Host Write Commands: 0 00:26:25.571 Controller Busy Time: 0 minutes 00:26:25.571 Power Cycles: 0 00:26:25.571 Power On Hours: 0 hours 00:26:25.571 Unsafe Shutdowns: 0 00:26:25.571 Unrecoverable Media Errors: 0 00:26:25.571 Lifetime Error Log Entries: 0 00:26:25.571 Warning Temperature Time: 0 minutes 00:26:25.571 Critical Temperature Time: 0 minutes 00:26:25.571 00:26:25.571 Number of Queues 00:26:25.571 ================ 00:26:25.571 Number of I/O Submission Queues: 127 00:26:25.571 Number of I/O Completion Queues: 127 00:26:25.571 00:26:25.571 Active Namespaces 00:26:25.571 ================= 00:26:25.571 Namespace ID:1 00:26:25.571 Error Recovery Timeout: Unlimited 00:26:25.571 Command Set Identifier: NVM (00h) 00:26:25.571 Deallocate: Supported 00:26:25.571 Deallocated/Unwritten Error: Not Supported 00:26:25.571 Deallocated Read Value: Unknown 00:26:25.571 Deallocate in Write Zeroes: Not Supported 00:26:25.571 Deallocated Guard Field: 0xFFFF 00:26:25.571 Flush: Supported 00:26:25.571 Reservation: Supported 00:26:25.571 Namespace Sharing Capabilities: Multiple Controllers 00:26:25.571 Size (in LBAs): 131072 (0GiB) 00:26:25.571 Capacity (in LBAs): 131072 (0GiB) 
00:26:25.571 Utilization (in LBAs): 131072 (0GiB) 00:26:25.571 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:25.571 EUI64: ABCDEF0123456789 00:26:25.571 UUID: a2f9dc94-fd6c-4489-966d-e5b08b1eee03 00:26:25.571 Thin Provisioning: Not Supported 00:26:25.571 Per-NS Atomic Units: Yes 00:26:25.571 Atomic Boundary Size (Normal): 0 00:26:25.571 Atomic Boundary Size (PFail): 0 00:26:25.571 Atomic Boundary Offset: 0 00:26:25.571 Maximum Single Source Range Length: 65535 00:26:25.571 Maximum Copy Length: 65535 00:26:25.571 Maximum Source Range Count: 1 00:26:25.571 NGUID/EUI64 Never Reused: No 00:26:25.571 Namespace Write Protected: No 00:26:25.571 Number of LBA Formats: 1 00:26:25.571 Current LBA Format: LBA Format #00 00:26:25.571 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:25.571 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:25.571 rmmod nvme_tcp 00:26:25.571 rmmod nvme_fabrics 00:26:25.571 rmmod nvme_keyring 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1698679 ']' 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1698679 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1698679 ']' 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1698679 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1698679 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1698679' 00:26:25.571 killing process with pid 1698679 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1698679 00:26:25.571 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1698679 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.833 17:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.381 17:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:28.381 00:26:28.381 real 0m11.694s 00:26:28.381 user 0m8.725s 00:26:28.381 sys 0m6.178s 00:26:28.381 17:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.381 17:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 ************************************ 00:26:28.381 END TEST nvmf_identify 00:26:28.381 ************************************ 00:26:28.381 17:42:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:28.381 17:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:28.381 17:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.381 17:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.381 ************************************ 00:26:28.381 START TEST nvmf_perf 00:26:28.381 ************************************ 00:26:28.381 17:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:28.381 * Looking for test storage... 
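That closes out nvmf_identify: identify.sh@52 deletes the subsystem over RPC, nvmftestfini unloads the host-side modules (the rmmod nvme_tcp, nvme_fabrics, and nvme_keyring lines above), and killprocess stops the nvmf_tgt reactor, pid 1698679 in this run, before perf.sh takes over. A hedged sketch of the equivalent manual teardown, assuming the same SPDK checkout and the target's default RPC socket:

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # what rpc_cmd wraps at identify.sh@52
kill 1698679                                        # nvmf_tgt pid from this particular run
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring   # the same modules nvmftestfini removed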
00:26:28.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:28.381 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:28.381 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:28.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.382 --rc genhtml_branch_coverage=1 00:26:28.382 --rc genhtml_function_coverage=1 00:26:28.382 --rc genhtml_legend=1 00:26:28.382 --rc geninfo_all_blocks=1 00:26:28.382 --rc geninfo_unexecuted_blocks=1 00:26:28.382 00:26:28.382 ' 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:28.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.382 --rc genhtml_branch_coverage=1 00:26:28.382 --rc genhtml_function_coverage=1 00:26:28.382 --rc genhtml_legend=1 00:26:28.382 --rc geninfo_all_blocks=1 00:26:28.382 --rc geninfo_unexecuted_blocks=1 00:26:28.382 00:26:28.382 ' 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:28.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.382 --rc genhtml_branch_coverage=1 00:26:28.382 --rc genhtml_function_coverage=1 00:26:28.382 --rc genhtml_legend=1 00:26:28.382 --rc geninfo_all_blocks=1 00:26:28.382 --rc geninfo_unexecuted_blocks=1 00:26:28.382 00:26:28.382 ' 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:28.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.382 --rc genhtml_branch_coverage=1 00:26:28.382 --rc genhtml_function_coverage=1 00:26:28.382 --rc genhtml_legend=1 00:26:28.382 --rc geninfo_all_blocks=1 00:26:28.382 --rc geninfo_unexecuted_blocks=1 00:26:28.382 00:26:28.382 ' 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.382 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:28.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.383 17:42:20 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:28.383 17:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:36.522 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:36.522 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.522 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:36.523 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.523 17:42:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:36.523 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.523 17:42:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:26:36.523 00:26:36.523 --- 10.0.0.2 ping statistics --- 00:26:36.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.523 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:26:36.523 00:26:36.523 --- 10.0.0.1 ping statistics --- 00:26:36.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.523 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1701167 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1701167 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1701167 ']' 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:26:36.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.523 17:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:36.523 [2024-12-06 17:42:27.622472] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:26:36.523 [2024-12-06 17:42:27.622539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.523 [2024-12-06 17:42:27.719538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:36.523 [2024-12-06 17:42:27.772604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.523 [2024-12-06 17:42:27.772666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.523 [2024-12-06 17:42:27.772674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.523 [2024-12-06 17:42:27.772681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.523 [2024-12-06 17:42:27.772687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.523 [2024-12-06 17:42:27.774689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.523 [2024-12-06 17:42:27.774801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.523 [2024-12-06 17:42:27.775117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.523 [2024-12-06 17:42:27.775121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.523 17:42:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.523 17:42:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:26:36.523 17:42:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:36.523 17:42:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.523 17:42:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:36.523 17:42:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.523 17:42:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:36.523 17:42:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:37.094 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:37.094 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:37.355 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:37.355 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:37.615 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
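The target provisioning traced in the lines that follow can be reproduced by hand with the same rpc.py calls against the default /var/tmp/spdk.sock RPC socket. A minimal sketch, assuming the bdev and subsystem names used by this run (Malloc0, Nvme0n1, nqn.2016-06.io.spdk:cnode1) and that nvmf_tgt is already up and listening on the RPC socket:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # 64 MiB malloc bdev with 512-byte blocks; the RPC prints the new bdev name (Malloc0)
    $RPC bdev_malloc_create 64 512
    # TCP transport (the '-o' flag is taken verbatim from this run's NVMF_TRANSPORT_OPTS)
    $RPC nvmf_create_transport -t tcp -o
    # subsystem with two namespaces: the malloc bdev and the attached local NVMe drive
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    # data and discovery listeners on the target IP inside the test namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420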
00:26:37.615 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:37.615 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:37.615 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:37.615 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:37.615 [2024-12-06 17:42:29.592401] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.615 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:37.876 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:37.876 17:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:38.136 17:42:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:38.136 17:42:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:38.396 17:42:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:38.396 [2024-12-06 17:42:30.388122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.396 17:42:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:38.654 17:42:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:38.654 17:42:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:38.654 17:42:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:38.655 17:42:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:40.035 Initializing NVMe Controllers 00:26:40.035 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:40.035 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:40.035 Initialization complete. Launching workers. 
00:26:40.035 ======================================================== 00:26:40.035 Latency(us) 00:26:40.035 Device Information : IOPS MiB/s Average min max 00:26:40.035 PCIE (0000:65:00.0) NSID 1 from core 0: 78060.98 304.93 409.23 13.33 5277.15 00:26:40.035 ======================================================== 00:26:40.035 Total : 78060.98 304.93 409.23 13.33 5277.15 00:26:40.035 00:26:40.035 17:42:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.420 Initializing NVMe Controllers 00:26:41.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:41.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:41.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:41.420 Initialization complete. Launching workers. 00:26:41.420 ======================================================== 00:26:41.420 Latency(us) 00:26:41.420 Device Information : IOPS MiB/s Average min max 00:26:41.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 116.93 0.46 8825.02 193.06 45599.00 00:26:41.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 68.96 0.27 15079.83 7959.75 51878.92 00:26:41.420 ======================================================== 00:26:41.420 Total : 185.88 0.73 11145.35 193.06 51878.92 00:26:41.420 00:26:41.420 17:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:42.803 Initializing NVMe Controllers 00:26:42.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:42.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:42.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:42.804 Initialization complete. Launching workers. 00:26:42.804 ======================================================== 00:26:42.804 Latency(us) 00:26:42.804 Device Information : IOPS MiB/s Average min max 00:26:42.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11821.00 46.18 2713.80 412.86 9502.14 00:26:42.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3673.00 14.35 8752.08 7251.95 16460.12 00:26:42.804 ======================================================== 00:26:42.804 Total : 15494.00 60.52 4145.23 412.86 16460.12 00:26:42.804 00:26:42.804 17:42:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:42.804 17:42:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:42.804 17:42:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:45.343 Initializing NVMe Controllers 00:26:45.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:45.343 Controller IO queue size 128, less than required. 00:26:45.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
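The IOPS and MiB/s columns in the latency tables above are mutually consistent: MiB/s = IOPS x I/O size / 2^20. A quick check of the local PCIe row (4096-byte I/Os) with plain awk:

    awk 'BEGIN { printf "%.2f MiB/s\n", 78060.98 * 4096 / (1024*1024) }'   # prints 304.93, matching the table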
00:26:45.343 Controller IO queue size 128, less than required. 00:26:45.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:45.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:45.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:45.343 Initialization complete. Launching workers. 00:26:45.343 ======================================================== 00:26:45.343 Latency(us) 00:26:45.343 Device Information : IOPS MiB/s Average min max 00:26:45.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1840.26 460.06 70957.78 41902.46 120857.18 00:26:45.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.77 149.94 217716.44 48421.44 328193.51 00:26:45.343 ======================================================== 00:26:45.343 Total : 2440.02 610.01 107031.70 41902.46 328193.51 00:26:45.343 00:26:45.343 17:42:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:45.343 No valid NVMe controllers or AIO or URING devices found 00:26:45.343 Initializing NVMe Controllers 00:26:45.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:45.343 Controller IO queue size 128, less than required. 00:26:45.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:45.343 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:45.343 Controller IO queue size 128, less than required. 00:26:45.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:45.343 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:26:45.343 WARNING: Some requested NVMe devices were skipped 00:26:45.343 17:42:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:47.879 Initializing NVMe Controllers 00:26:47.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:47.879 Controller IO queue size 128, less than required. 00:26:47.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.879 Controller IO queue size 128, less than required. 00:26:47.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:47.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:47.879 Initialization complete. Launching workers. 
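In the per-connection transport statistics printed below, polls counts poll-group iterations and idle_polls, as the name suggests, those iterations that found no work. In this run the non-idle polls match sock_completions exactly for both queue pairs, i.e. every productive poll retired at least one socket completion; a quick check of that difference:

    awk 'BEGIN { print 32227-12776, 55320-35883 }'   # 19451 19437, equal to the two sock_completions rows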
00:26:47.879 00:26:47.879 ==================== 00:26:47.879 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:47.879 TCP transport: 00:26:47.879 polls: 32227 00:26:47.879 idle_polls: 12776 00:26:47.879 sock_completions: 19451 00:26:47.879 nvme_completions: 7027 00:26:47.879 submitted_requests: 10482 00:26:47.879 queued_requests: 1 00:26:47.879 00:26:47.879 ==================== 00:26:47.879 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:47.879 TCP transport: 00:26:47.879 polls: 55320 00:26:47.879 idle_polls: 35883 00:26:47.879 sock_completions: 19437 00:26:47.879 nvme_completions: 7551 00:26:47.879 submitted_requests: 11430 00:26:47.879 queued_requests: 1 00:26:47.879 ======================================================== 00:26:47.879 Latency(us) 00:26:47.879 Device Information : IOPS MiB/s Average min max 00:26:47.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1756.08 439.02 74301.41 41233.02 118418.90 00:26:47.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1887.05 471.76 68352.70 32590.69 121144.11 00:26:47.879 ======================================================== 00:26:47.879 Total : 3643.13 910.78 71220.13 32590.69 121144.11 00:26:47.879 00:26:47.879 17:42:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:47.879 17:42:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:48.139 rmmod nvme_tcp 00:26:48.139 rmmod nvme_fabrics 00:26:48.139 rmmod nvme_keyring 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1701167 ']' 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1701167 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1701167 ']' 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1701167 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:48.139 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1701167 00:26:48.398 17:42:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:48.398 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:48.398 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1701167' 00:26:48.398 killing process with pid 1701167 00:26:48.398 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1701167 00:26:48.398 17:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1701167 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.304 17:42:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:52.869 00:26:52.869 real 0m24.352s 00:26:52.869 user 0m59.223s 00:26:52.869 sys 0m8.586s 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:52.869 ************************************ 00:26:52.869 END TEST nvmf_perf 00:26:52.869 ************************************ 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.869 ************************************ 00:26:52.869 START TEST nvmf_fio_host 00:26:52.869 ************************************ 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:52.869 * Looking for test storage... 
00:26:52.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:52.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.869 --rc genhtml_branch_coverage=1 00:26:52.869 --rc genhtml_function_coverage=1 00:26:52.869 --rc genhtml_legend=1 00:26:52.869 --rc geninfo_all_blocks=1 00:26:52.869 --rc geninfo_unexecuted_blocks=1 00:26:52.869 00:26:52.869 ' 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:52.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.869 --rc genhtml_branch_coverage=1 00:26:52.869 --rc genhtml_function_coverage=1 00:26:52.869 --rc genhtml_legend=1 00:26:52.869 --rc geninfo_all_blocks=1 00:26:52.869 --rc geninfo_unexecuted_blocks=1 00:26:52.869 00:26:52.869 ' 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:52.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.869 --rc genhtml_branch_coverage=1 00:26:52.869 --rc genhtml_function_coverage=1 00:26:52.869 --rc genhtml_legend=1 00:26:52.869 --rc geninfo_all_blocks=1 00:26:52.869 --rc geninfo_unexecuted_blocks=1 00:26:52.869 00:26:52.869 ' 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:52.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.869 --rc genhtml_branch_coverage=1 00:26:52.869 --rc genhtml_function_coverage=1 00:26:52.869 --rc genhtml_legend=1 00:26:52.869 --rc geninfo_all_blocks=1 00:26:52.869 --rc geninfo_unexecuted_blocks=1 00:26:52.869 00:26:52.869 ' 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.869 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.870 17:42:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:52.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:52.870 
17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:52.870 17:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:01.027 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:01.027 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:01.027 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:01.027 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:01.027 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:01.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:27:01.028 00:27:01.028 --- 10.0.0.2 ping statistics --- 00:27:01.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.028 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:27:01.028 00:27:01.028 --- 10.0.0.1 ping statistics --- 00:27:01.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.028 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.028 17:42:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1703846 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1703846 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1703846 ']' 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.028 [2024-12-06 17:42:52.060632] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
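The nvmf_tcp_init sequence traced above isolates the target-side port in its own network namespace, addresses both sides, opens the listener port, and verifies reachability in both directions before the target starts. Condensed into a standalone sketch (root required; the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addressing are exactly as reported in this run):

    # Replay of the namespace plumbing performed by nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port, then check reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why nvmf_tgt is launched through 'ip netns exec cvl_0_0_ns_spdk' just above: every target-side process and command in the rest of the trace runs inside that namespace.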
00:27:01.028 [2024-12-06 17:42:52.060712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.028 [2024-12-06 17:42:52.158892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.028 [2024-12-06 17:42:52.211147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.028 [2024-12-06 17:42:52.211202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.028 [2024-12-06 17:42:52.211210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.028 [2024-12-06 17:42:52.211217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.028 [2024-12-06 17:42:52.211224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.028 [2024-12-06 17:42:52.213235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.028 [2024-12-06 17:42:52.213397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.028 [2024-12-06 17:42:52.213563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.028 [2024-12-06 17:42:52.213564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:27:01.028 17:42:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:01.028 [2024-12-06 17:42:53.062585] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.290 17:42:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:01.290 17:42:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:01.290 17:42:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.290 17:42:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:01.290 Malloc1 00:27:01.550 17:42:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:01.550 17:42:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:01.811 17:42:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.071 [2024-12-06 17:42:53.920760] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.071 17:42:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:02.071 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:02.375 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:02.375 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:02.375 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:02.375 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:02.375 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:02.375 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:02.375 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:02.375 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:02.375 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:02.375 17:42:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:02.638 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:02.638 fio-3.35 00:27:02.638 Starting 1 thread 00:27:05.195 00:27:05.195 test: (groupid=0, jobs=1): 
err= 0: pid=1704068: Fri Dec 6 17:42:56 2024 00:27:05.195 read: IOPS=10.6k, BW=41.2MiB/s (43.2MB/s)(82.6MiB/2004msec) 00:27:05.195 slat (usec): min=2, max=277, avg= 2.18, stdev= 2.70 00:27:05.195 clat (usec): min=3697, max=9852, avg=6709.87, stdev=1122.94 00:27:05.195 lat (usec): min=3731, max=9858, avg=6712.06, stdev=1122.95 00:27:05.195 clat percentiles (usec): 00:27:05.195 | 1.00th=[ 4490], 5.00th=[ 4752], 10.00th=[ 4948], 20.00th=[ 5211], 00:27:05.195 | 30.00th=[ 6390], 40.00th=[ 6849], 50.00th=[ 7111], 60.00th=[ 7308], 00:27:05.195 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8029], 00:27:05.195 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 9110], 99.95th=[ 9372], 00:27:05.195 | 99.99th=[ 9634] 00:27:05.195 bw ( KiB/s): min=37936, max=52912, per=99.87%, avg=42172.00, stdev=7175.12, samples=4 00:27:05.195 iops : min= 9484, max=13228, avg=10543.00, stdev=1793.78, samples=4 00:27:05.195 write: IOPS=10.6k, BW=41.2MiB/s (43.2MB/s)(82.6MiB/2004msec); 0 zone resets 00:27:05.195 slat (usec): min=2, max=270, avg= 2.26, stdev= 2.05 00:27:05.195 clat (usec): min=2885, max=8189, avg=5387.20, stdev=898.45 00:27:05.195 lat (usec): min=2903, max=8248, avg=5389.46, stdev=898.51 00:27:05.195 clat percentiles (usec): 00:27:05.195 | 1.00th=[ 3621], 5.00th=[ 3851], 10.00th=[ 3982], 20.00th=[ 4228], 00:27:05.195 | 30.00th=[ 5080], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5866], 00:27:05.195 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6456], 00:27:05.195 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7635], 99.95th=[ 7898], 00:27:05.195 | 99.99th=[ 8094] 00:27:05.195 bw ( KiB/s): min=38400, max=53312, per=99.93%, avg=42178.00, stdev=7422.87, samples=4 00:27:05.195 iops : min= 9600, max=13328, avg=10544.50, stdev=1855.72, samples=4 00:27:05.195 lat (msec) : 4=5.18%, 10=94.82% 00:27:05.195 cpu : usr=72.64%, sys=26.36%, ctx=31, majf=0, minf=16 00:27:05.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:05.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:05.195 issued rwts: total=21155,21147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:05.195 00:27:05.195 Run status group 0 (all jobs): 00:27:05.195 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=82.6MiB (86.7MB), run=2004-2004msec 00:27:05.195 WRITE: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=82.6MiB (86.6MB), run=2004-2004msec 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 
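Before the SGL pass below starts, the run that just completed condenses to two steps: host/fio.sh provisions a subsystem over rpc.py (trace lines @29 through @36 above), then drives it with stock fio plus the LD_PRELOADed SPDK ioengine, so the NVMe-oF target is addressed through fio's --filename string instead of a block device. A replay, with paths exactly as they appear in this trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Provisioning: transport, backing bdev, subsystem, namespace, listeners.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc1      # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Workload: fio with the SPDK NVMe ioengine preloaded.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096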
00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:05.195 17:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:05.454 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:05.454 fio-3.35 00:27:05.454 Starting 1 thread 00:27:08.017 00:27:08.017 test: (groupid=0, jobs=1): err= 0: pid=1704266: Fri Dec 6 17:42:59 2024 00:27:08.017 read: IOPS=9237, BW=144MiB/s (151MB/s)(289MiB/2004msec) 00:27:08.017 slat (usec): min=3, max=113, avg= 3.61, stdev= 1.59 00:27:08.017 clat (usec): min=1916, max=52198, avg=8560.76, stdev=3842.49 00:27:08.017 lat (usec): min=1919, max=52202, avg=8564.36, stdev=3842.53 00:27:08.017 clat percentiles (usec): 00:27:08.017 | 1.00th=[ 4424], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6456], 00:27:08.017 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8225], 60.00th=[ 8717], 00:27:08.017 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10945], 95.00th=[11469], 00:27:08.017 | 99.00th=[13960], 99.50th=[46400], 99.90th=[51119], 99.95th=[51643], 00:27:08.017 | 99.99th=[52167] 00:27:08.017 bw ( KiB/s): min=64032, max=86880, per=49.26%, avg=72808.00, stdev=10096.81, samples=4 00:27:08.017 iops : min= 4002, max= 5430, avg=4550.50, stdev=631.05, samples=4 00:27:08.017 write: IOPS=5618, BW=87.8MiB/s (92.1MB/s)(149MiB/1694msec); 0 zone resets 00:27:08.017 slat (usec): min=39, 
max=325, avg=40.83, stdev= 6.58 00:27:08.017 clat (usec): min=2199, max=15313, avg=9053.12, stdev=1372.24 00:27:08.017 lat (usec): min=2239, max=15353, avg=9093.95, stdev=1373.51 00:27:08.017 clat percentiles (usec): 00:27:08.017 | 1.00th=[ 5669], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7963], 00:27:08.017 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:27:08.017 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:27:08.017 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13960], 99.95th=[13960], 00:27:08.017 | 99.99th=[15270] 00:27:08.017 bw ( KiB/s): min=66432, max=90432, per=84.09%, avg=75592.00, stdev=10605.17, samples=4 00:27:08.017 iops : min= 4152, max= 5652, avg=4724.50, stdev=662.82, samples=4 00:27:08.017 lat (msec) : 2=0.01%, 4=0.34%, 10=76.43%, 20=22.77%, 50=0.32% 00:27:08.017 lat (msec) : 100=0.14% 00:27:08.017 cpu : usr=84.82%, sys=13.73%, ctx=17, majf=0, minf=30 00:27:08.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:08.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:08.017 issued rwts: total=18511,9518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.017 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:08.017 00:27:08.017 Run status group 0 (all jobs): 00:27:08.017 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=289MiB (303MB), run=2004-2004msec 00:27:08.017 WRITE: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=149MiB (156MB), run=1694-1694msec 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.017 rmmod nvme_tcp 00:27:08.017 rmmod nvme_fabrics 00:27:08.017 rmmod nvme_keyring 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1703846 ']' 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1703846 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1703846 ']' 00:27:08.017 17:42:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1703846 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1703846 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:08.017 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1703846' 00:27:08.018 killing process with pid 1703846 00:27:08.018 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1703846 00:27:08.018 17:42:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1703846 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.018 17:43:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.560 00:27:10.560 real 0m17.779s 00:27:10.560 user 1m10.256s 00:27:10.560 sys 0m7.628s 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.560 ************************************ 00:27:10.560 END TEST nvmf_fio_host 00:27:10.560 ************************************ 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.560 ************************************ 00:27:10.560 START TEST nvmf_failover 00:27:10.560 ************************************ 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:10.560 * Looking for test storage... 00:27:10.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:27:10.560 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:10.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.561 --rc genhtml_branch_coverage=1 00:27:10.561 --rc genhtml_function_coverage=1 00:27:10.561 --rc genhtml_legend=1 00:27:10.561 --rc geninfo_all_blocks=1 00:27:10.561 --rc geninfo_unexecuted_blocks=1 00:27:10.561 00:27:10.561 ' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:10.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.561 --rc genhtml_branch_coverage=1 00:27:10.561 --rc genhtml_function_coverage=1 00:27:10.561 --rc genhtml_legend=1 00:27:10.561 --rc geninfo_all_blocks=1 00:27:10.561 --rc geninfo_unexecuted_blocks=1 00:27:10.561 00:27:10.561 ' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:10.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.561 --rc genhtml_branch_coverage=1 00:27:10.561 --rc genhtml_function_coverage=1 00:27:10.561 --rc genhtml_legend=1 00:27:10.561 --rc geninfo_all_blocks=1 00:27:10.561 --rc geninfo_unexecuted_blocks=1 00:27:10.561 00:27:10.561 ' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:10.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.561 --rc genhtml_branch_coverage=1 00:27:10.561 --rc genhtml_function_coverage=1 00:27:10.561 --rc genhtml_legend=1 00:27:10.561 --rc geninfo_all_blocks=1 00:27:10.561 --rc geninfo_unexecuted_blocks=1 00:27:10.561 00:27:10.561 ' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
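The lt 1.15 2 check traced at the top of this test gates which coverage flags lcov gets: cmp_versions splits both version strings on the IFS=.-: separators and compares them field by field. A hedged reimplementation of that helper, reduced to the strictly-less-than case actually exercised here:

    # Sketch of scripts/common.sh lt()/cmp_versions for the '<' case:
    # succeeds when version $1 is strictly older than version $2.
    lt() {
      local -a ver1 ver2; local v n
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov < 2: enable lcov_branch_coverage/lcov_function_coverage'

In this run the comparison succeeds, and the LCOV_OPTS exported above pick up the branch- and function-coverage switches.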
00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:27:10.561 17:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:18.700 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.700 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:27:18.700 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:18.700 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:18.700 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:18.700 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:18.700 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:18.700 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:18.701 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:18.701 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:18.701 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:18.701 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
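gather_supported_nvmf_pci_devs, traced above, matches PCI functions against a table of supported vendor:device IDs and then resolves each hit to its kernel net device via sysfs, which is where the two 'Found ...' pairs come from. The same discovery reduced to the single E810 ID (0x8086:0x159b) this machine reports:

    # Minimal version of the scan traced above: match the E810 ID, then list
    # the net interfaces each matching PCI function exposes under sysfs.
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      pci_net_devs=("$pci"/net/*)
      echo "Found ${pci##*/} ($vendor - $device): ${pci_net_devs[*]##*/}"
    done

With two matching ports and tcp as the transport, the first interface (cvl_0_0) becomes the target side and the second (cvl_0_1) the initiator side, exactly as assigned above.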
00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:18.701 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:18.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:18.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms
00:27:18.702 
00:27:18.702 --- 10.0.0.2 ping statistics ---
00:27:18.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:18.702 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:18.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:18.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms
00:27:18.702 
00:27:18.702 --- 10.0.0.1 ping statistics ---
00:27:18.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:18.702 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1706736
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1706736
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1706736 ']'
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:18.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:18.702 17:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:27:18.702 [2024-12-06 17:43:09.783714] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:27:18.702 [2024-12-06 17:43:09.783780] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:18.702 [2024-12-06 17:43:09.881868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:18.702 [2024-12-06 17:43:09.916405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
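
Everything the target needs is isolated in the cvl_0_0_ns_spdk namespace before nvmf_tgt starts: the target port moves into the namespace with 10.0.0.2/24, the initiator port stays in the root namespace with 10.0.0.1/24, and reachability is verified in both directions with a single ping each. A condensed, runnable replay of the commands from the trace (it omits the addr-flush steps and the iptables comment tag shown above):

    ip netns add cvl_0_0_ns_spdk                      # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
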
00:27:18.702 [2024-12-06 17:43:09.916437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:18.702 [2024-12-06 17:43:09.916445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:18.702 [2024-12-06 17:43:09.916452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:18.702 [2024-12-06 17:43:09.916458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:18.702 [2024-12-06 17:43:09.917936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:18.702 [2024-12-06 17:43:09.917953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:18.702 [2024-12-06 17:43:09.917957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:18.702 17:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:18.702 17:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:27:18.702 17:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:18.702 17:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:18.702 17:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:27:18.702 17:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:18.702 17:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:27:18.702 [2024-12-06 17:43:10.760452] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:18.963 17:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:27:18.963 Malloc0
00:27:18.963 17:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:19.223 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:19.484 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:19.484 [2024-12-06 17:43:11.494056] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:19.484 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:19.744 [2024-12-06 17:43:11.674453] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:19.744 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:20.005 [2024-12-06 17:43:11.859004] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
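
At this point the target is fully provisioned over /var/tmp/spdk.sock: one TCP transport, one 64 MiB malloc bdev, one subsystem, and three listeners so the host side has paths to fail between. The same sequence, condensed into a runnable form (the RPC variable is shorthand introduced here, not something failover.sh defines):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                    # three failover paths
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
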
00:27:20.005 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1706792
00:27:20.005 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:27:20.005 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:20.005 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1706792 /var/tmp/bdevperf.sock
00:27:20.005 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1706792 ']'
00:27:20.005 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:20.005 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:20.005 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:20.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:20.005 17:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:27:20.946 17:43:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:20.946 17:43:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:27:20.946 17:43:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:20.946 NVMe0n1
00:27:20.946 17:43:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:21.516 
00:27:21.516 17:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1706809
00:27:21.516 17:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:27:21.516 17:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:22.457 17:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:22.457 [2024-12-06 17:43:14.509544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa1ed0 is same with the state(6) to be set
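
The two bdev_nvme_attach_controller calls above are what make this a failover test rather than a plain I/O run: both use the same controller name NVMe0 and the same subsystem NQN, differing only in port, and -x failover tells the bdev_nvme module to add the second connection as an alternate path rather than reject the duplicate name. The recv-state ERROR that follows appears to be the expected side effect of nvmf_subsystem_remove_listener tearing down the active 4420 connection mid-I/O. The attach step, condensed (BPERF_RPC is shorthand introduced here):

    BPERF_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    NQN=nqn.2016-06.io.spdk:cnode1
    # Active path first, then an alternate path on a second listener port.
    $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
    $BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover
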
00:27:22.718 17:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:27:26.016 17:43:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:26.016 
00:27:26.016 17:43:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:26.016 [2024-12-06 17:43:17.974046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa2980 is same with the state(6) to be set
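
From here the test simply walks the workload around the ring of listeners: drop 4420 so I/O fails over to 4421, attach a third path at 4422 and drop 4421, then restore 4420 and drop 4422. Reduced to its RPC skeleton, reusing RPC from the provisioning sketch above (listener_add and listener_del are illustrative wrappers, not functions from failover.sh):

    NQN=nqn.2016-06.io.spdk:cnode1
    listener_del() { $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s "$1"; }
    listener_add() { $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s "$1"; }
    listener_del 4420   # kill the active path; I/O fails over to 4421
    sleep 3
    # (attach a third path at 4422 via the bdevperf socket, as traced above)
    listener_del 4421   # fail over again, from 4421 to 4422
    sleep 3
    listener_add 4420   # restore the original path
    sleep 1
    listener_del 4422   # and fail back to 4420
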
00:27:26.017 17:43:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:27:29.384 17:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:29.384 [2024-12-06 17:43:21.166448] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:29.385 17:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:27:30.326 17:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:30.326 [2024-12-06 17:43:22.357752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e68140 is same with the state(6) to be set
00:27:30.326 17:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1706809
00:27:36.941 {
00:27:36.941 "results": [
00:27:36.941 {
00:27:36.941 "job": "NVMe0n1",
00:27:36.941 "core_mask": "0x1",
00:27:36.941 "workload": "verify",
00:27:36.941 "status": "finished",
00:27:36.941 "verify_range": {
00:27:36.941 "start": 0,
00:27:36.941 "length": 16384
00:27:36.941 },
00:27:36.941 "queue_depth": 128,
00:27:36.941 "io_size": 4096,
00:27:36.941 "runtime": 15.004726,
00:27:36.941 "iops": 12025.611130786394,
00:27:36.941 "mibps": 46.97504347963435,
00:27:36.941 "io_failed": 20724,
00:27:36.941 "io_timeout": 0,
00:27:36.941 "avg_latency_us": 9525.204435098882,
00:27:36.941 "min_latency_us": 525.6533333333333,
00:27:36.941 "max_latency_us": 20316.16
00:27:36.941 }
00:27:36.941 ],
00:27:36.941 "core_count": 1
00:27:36.941 }
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1706792
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1706792 ']'
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1706792
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706792
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706792'
00:27:36.941 killing process with pid 1706792
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1706792
00:27:36.941 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1706792
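
The figures in the results block are internally consistent: at 4 KiB per I/O, 12025.61 IOPS works out to 46.98 MiB/s, matching the reported mibps, and the 20724 failed I/Os are presumably the requests caught in flight each time a listener was torn down. A quick arithmetic check (any awk will do):

    awk 'BEGIN { printf "%.2f MiB/s\n", 12025.611130786394 * 4096 / (1024 * 1024) }'
    # -> 46.98 MiB/s
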
00:27:36.941 [2024-12-06 17:43:11.939828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706792 ] 00:27:36.941 [2024-12-06 17:43:12.026767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.941 [2024-12-06 17:43:12.062337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.941 Running I/O for 15 seconds... 00:27:36.941 10680.00 IOPS, 41.72 MiB/s [2024-12-06T16:43:29.007Z] [2024-12-06 17:43:14.510382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91776 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:36.941 [2024-12-06 17:43:14.510739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.941 [2024-12-06 17:43:14.510841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.941 [2024-12-06 17:43:14.510851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.510858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.510867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.510874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.510884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.510891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.510900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.510908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.510917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.510924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.510934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.510942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.510951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.510958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.510968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.510976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.510985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.510993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511082] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
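The "(00/08)" pair that spdk_nvme_print_completion prints in every record above is (status code type / status code): SCT 0x00 is the generic command status set and SC 0x08 is ABORTED - SQ DELETION, i.e. the submission queue was torn down underneath the queued I/O. Below is a minimal C sketch, using SPDK's public completion types, of how a completion callback could decode that status; the callback name, its ctx argument, and the retry policy are illustrative assumptions, not code from this run.

/* Sketch: decoding the "(00/08)" status seen in the log above.
 * Assumes SPDK headers are on the include path. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h" /* struct spdk_nvme_cpl, spdk_nvme_cpl_is_error() */

/* "(00/08)" == (sct/sc): generic status, ABORTED - SQ DELETION. */
static bool
is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

/* Hypothetical I/O completion callback. SQ-deletion aborts are transient
 * (the qpair went away, e.g. during a failover like the one logged below),
 * so a caller would typically requeue the I/O after reconnect rather than
 * surface a hard error to the application. */
static void
io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl)) {
		if (is_sq_deletion_abort(cpl)) {
			printf("transient abort (SQ deletion), retry after reconnect\n");
		} else {
			printf("hard I/O error: sct=0x%x sc=0x%x\n",
			       cpl->status.sct, cpl->status.sc);
		}
	}
}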
00:27:36.942 [2024-12-06 17:43:14.511427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.942 [2024-12-06 17:43:14.511511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.942 [2024-12-06 17:43:14.511520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511596] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:52 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.511986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.511993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.512010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.512026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.943 [2024-12-06 17:43:14.512043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.943 [2024-12-06 17:43:14.512060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.943 [2024-12-06 17:43:14.512077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.943 [2024-12-06 17:43:14.512093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92504 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.943 [2024-12-06 17:43:14.512112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.943 [2024-12-06 17:43:14.512128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.943 [2024-12-06 17:43:14.512145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.943 [2024-12-06 17:43:14.512161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.943 [2024-12-06 17:43:14.512178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.943 [2024-12-06 17:43:14.512187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 
[2024-12-06 17:43:14.512278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.944 [2024-12-06 17:43:14.512563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.944 [2024-12-06 17:43:14.512591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.944 [2024-12-06 17:43:14.512598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92728 len:8 PRP1 0x0 PRP2 0x0 00:27:36.944 [2024-12-06 17:43:14.512610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512654] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:36.944 [2024-12-06 17:43:14.512676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.944 [2024-12-06 17:43:14.512684] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.944 [2024-12-06 17:43:14.512700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.944 [2024-12-06 17:43:14.512716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.944 [2024-12-06 17:43:14.512731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:14.512739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:36.944 [2024-12-06 17:43:14.512778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212f9d0 (9): Bad file descriptor 00:27:36.944 [2024-12-06 17:43:14.516347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:27:36.944 [2024-12-06 17:43:14.584909] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:27:36.944 10414.50 IOPS, 40.68 MiB/s [2024-12-06T16:43:29.010Z] 11028.33 IOPS, 43.08 MiB/s [2024-12-06T16:43:29.010Z] 11363.00 IOPS, 44.39 MiB/s [2024-12-06T16:43:29.010Z] [2024-12-06 17:43:17.976476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.944 [2024-12-06 17:43:17.976505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:17.976517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.944 [2024-12-06 17:43:17.976523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:17.976534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.944 [2024-12-06 17:43:17.976539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:17.976546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.944 [2024-12-06 17:43:17.976551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:17.976559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.944 [2024-12-06 
17:43:17.976564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:17.976570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.944 [2024-12-06 17:43:17.976575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:17.976582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.944 [2024-12-06 17:43:17.976587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:17.976594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.944 [2024-12-06 17:43:17.976599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.944 [2024-12-06 17:43:17.976606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.944 [2024-12-06 17:43:17.976611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.945 [2024-12-06 17:43:17.976913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.945 [2024-12-06 17:43:17.976925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.945 [2024-12-06 17:43:17.976937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.945 [2024-12-06 17:43:17.976948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.945 [2024-12-06 17:43:17.976960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.945 [2024-12-06 17:43:17.976972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.945 [2024-12-06 17:43:17.976985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.976991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.945 [2024-12-06 17:43:17.976996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.977002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.945 [2024-12-06 17:43:17.977008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.977014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.945 [2024-12-06 17:43:17.977019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.945 [2024-12-06 17:43:17.977026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 
[2024-12-06 17:43:17.977048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.946 [2024-12-06 17:43:17.977276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.946 [2024-12-06 17:43:17.977283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:36.946 [2024-12-06 17:43:17.977288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion record pairs repeat for lba:45064 through lba:45288, step 8 (cid varies) ...]
00:27:36.947 [2024-12-06 17:43:17.977644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:36.947 [2024-12-06 17:43:17.977650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45296 len:8 PRP1 0x0 PRP2 0x0
00:27:36.947 [2024-12-06 17:43:17.977655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... "aborting queued i/o" / manual-completion record groups repeat for the queued WRITEs at lba:45304 through lba:45552, step 8 ...]
00:27:36.948 [2024-12-06 17:43:17.991015] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:36.948 [2024-12-06 17:43:17.991038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:36.948 [2024-12-06 17:43:17.991045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION record pair repeats for cid:2, cid:1 and cid:0 ...]
00:27:36.948 [2024-12-06 17:43:17.991085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:27:36.948 [2024-12-06 17:43:17.991118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212f9d0 (9): Bad file descriptor
00:27:36.948 [2024-12-06 17:43:17.993964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:27:36.948 [2024-12-06 17:43:18.143704] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
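The "(00/08)" status printed in all of these records is NVMe generic status code 0x08, ABORTED - SQ DELETION: the submission queue was torn down (here, for the failover from 10.0.0.2:4421 to 10.0.0.2:4422) before the commands completed, and dnr:0 marks them as retryable. Below is a minimal sketch, not part of this test run, of how an SPDK I/O completion callback (the spdk_nvme_cmd_cb signature passed to e.g. spdk_nvme_ns_cmd_write()) could classify this status and requeue the I/O; spdk_nvme_cpl_is_error() and the status macros are the public spdk/nvme.h API, while struct io_ctx and resubmit_io() are hypothetical names for the caller's own bookkeeping.

#include <inttypes.h>
#include "spdk/nvme.h"
#include "spdk/log.h"

/* Hypothetical per-I/O context kept by the application; not SPDK. */
struct io_ctx {
	uint64_t lba;
	int retries_left;
};

/* Hypothetical requeue hook: resubmit once the controller has
 * reconnected (e.g. after the reset shown above). Stubbed here. */
static void resubmit_io(struct io_ctx *io)
{
	(void)io;
}

static void write_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *io = cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* success */
	}

	/* "(00/08)" in the log is sct=0x0 (generic), sc=0x08 (aborted,
	 * SQ deleted); dnr:0 means the command may be retried. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
	    !cpl->status.dnr && io->retries_left-- > 0) {
		resubmit_io(io);
		return;
	}

	SPDK_ERRLOG("I/O at lba %" PRIu64 " failed: sct=0x%x sc=0x%x\n",
		    io->lba, cpl->status.sct, cpl->status.sc);
}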
00:27:36.948 11134.20 IOPS, 43.49 MiB/s [2024-12-06T16:43:29.014Z] 11332.17 IOPS, 44.27 MiB/s [2024-12-06T16:43:29.014Z] 11507.29 IOPS, 44.95 MiB/s [2024-12-06T16:43:29.014Z] 11640.62 IOPS, 45.47 MiB/s [2024-12-06T16:43:29.014Z] [2024-12-06 17:43:22.360240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:36.948 [2024-12-06 17:43:22.360270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION record pairs repeat for lba:5472 through lba:5520, step 8 (cid varies) ...]
[... identical WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) / ABORTED - SQ DELETION record pairs repeat for lba:5528 through lba:6256, step 8 (cid varies) ...]
00:27:36.951 [2024-12-06 17:43:22.361492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:36.951 [2024-12-06 17:43:22.361497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6264 len:8 PRP1 0x0 PRP2 0x0
00:27:36.951 [2024-12-06 17:43:22.361503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... "aborting queued i/o" / manual-completion record groups repeat for the queued WRITEs at lba:6272 through lba:6352, step 8 ...]
00:27:36.952 [2024-12-06 17:43:22.361722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:36.952 [2024-12-06 17:43:22.361726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:36.952 [2024-12-06 17:43:22.361731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6360 len:8 PRP1 0x0 PRP2 0x0
00:27:36.952 [2024-12-06 17:43:22.361736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:36.952 [2024-12-06 17:43:22.361741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6376 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6384 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6392 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6408 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361858] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6416 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6424 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6440 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6448 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.361958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.361962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6456 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.361967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.361972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:27:36.952 [2024-12-06 17:43:22.361976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.372518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.372545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.372559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.372565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.372571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6472 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.372578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.372586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:36.952 [2024-12-06 17:43:22.372591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:36.952 [2024-12-06 17:43:22.372597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6480 len:8 PRP1 0x0 PRP2 0x0 00:27:36.952 [2024-12-06 17:43:22.372604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.372654] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:36.952 [2024-12-06 17:43:22.372683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.952 [2024-12-06 17:43:22.372692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.372701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.952 [2024-12-06 17:43:22.372708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.372716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.952 [2024-12-06 17:43:22.372723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.372732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.952 [2024-12-06 17:43:22.372739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.952 [2024-12-06 17:43:22.372751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
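The abort storm above ends with bdev_nvme failing over from 10.0.0.2:4422 back to 10.0.0.2:4420, which only works because the subsystem was published on all three ports up front. A minimal bash sketch of that target-side setup, using the same rpc.py calls the script itself issues (here $SPDK_ROOT is a stand-in for this workspace's SPDK checkout, not a variable from the run):

    # Publish nqn.2016-06.io.spdk:cnode1 on three TCP listeners so the host
    # bdev_nvme layer always has an alternate path to fail over to.
    RPC="$SPDK_ROOT/scripts/rpc.py"
    for port in 4420 4421 4422; do
            "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
                    -t tcp -a 10.0.0.2 -s "$port"
    done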
00:27:36.952 [2024-12-06 17:43:22.372778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212f9d0 (9): Bad file descriptor 00:27:36.952 [2024-12-06 17:43:22.376037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:36.952 11605.00 IOPS, 45.33 MiB/s [2024-12-06T16:43:29.018Z] [2024-12-06 17:43:22.558842] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:27:36.952 11612.80 IOPS, 45.36 MiB/s [2024-12-06T16:43:29.018Z] 11718.00 IOPS, 45.77 MiB/s [2024-12-06T16:43:29.018Z] 11826.92 IOPS, 46.20 MiB/s [2024-12-06T16:43:29.018Z] 11917.54 IOPS, 46.55 MiB/s [2024-12-06T16:43:29.018Z] 11987.64 IOPS, 46.83 MiB/s 00:27:36.952 Latency(us) 00:27:36.952 [2024-12-06T16:43:29.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.952 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:36.952 Verification LBA range: start 0x0 length 0x4000 00:27:36.952 NVMe0n1 : 15.00 12025.61 46.98 1381.16 0.00 9525.20 525.65 20316.16 00:27:36.952 [2024-12-06T16:43:29.018Z] =================================================================================================================== 00:27:36.952 [2024-12-06T16:43:29.018Z] Total : 12025.61 46.98 1381.16 0.00 9525.20 525.65 20316.16 00:27:36.952 Received shutdown signal, test time was about 15.000000 seconds 00:27:36.952 00:27:36.952 Latency(us) 00:27:36.952 [2024-12-06T16:43:29.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.952 [2024-12-06T16:43:29.018Z] =================================================================================================================== 00:27:36.952 [2024-12-06T16:43:29.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1707011 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1707011 /var/tmp/bdevperf.sock 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1707011 ']' 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:36.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
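Once the first run completes, the script restarts bdevperf idle and drives it entirely over RPC, which is the wait-for-listen pattern visible above. A sketch of that flow under the same assumptions ($SPDK_ROOT as the SPDK checkout; the polling loop below stands in for the suite's waitforlisten helper):

    SOCK=/var/tmp/bdevperf.sock
    # -z makes bdevperf idle until it is configured through its RPC socket (-r).
    "$SPDK_ROOT"/build/examples/bdevperf -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # Poll the UNIX-domain socket until the app answers RPCs.
    until "$SPDK_ROOT"/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
            sleep 0.2
    done
    # Attach the remote namespace with failover-capable multipath (-x failover).
    "$SPDK_ROOT"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Kick off the configured verify workload; it prints the JSON results seen below.
    "$SPDK_ROOT"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
    kill "$bdevperf_pid"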
00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.952 17:43:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:37.522 17:43:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.522 17:43:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:37.522 17:43:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:37.783 [2024-12-06 17:43:29.683201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:37.783 17:43:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:38.044 [2024-12-06 17:43:29.859660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:38.044 17:43:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:38.305 NVMe0n1 00:27:38.305 17:43:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:38.305 00:27:38.565 17:43:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:38.825 00:27:38.825 17:43:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:38.825 17:43:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:39.085 17:43:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:39.345 17:43:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:42.645 17:43:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:42.645 17:43:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:42.645 17:43:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:42.645 17:43:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1707093 00:27:42.645 17:43:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1707093 00:27:43.587 { 00:27:43.587 "results": [ 00:27:43.587 { 00:27:43.587 "job": "NVMe0n1", 00:27:43.587 "core_mask": "0x1", 
00:27:43.587 "workload": "verify", 00:27:43.587 "status": "finished", 00:27:43.587 "verify_range": { 00:27:43.587 "start": 0, 00:27:43.587 "length": 16384 00:27:43.587 }, 00:27:43.587 "queue_depth": 128, 00:27:43.587 "io_size": 4096, 00:27:43.587 "runtime": 1.011098, 00:27:43.587 "iops": 12942.36562627955, 00:27:43.587 "mibps": 50.55611572765449, 00:27:43.587 "io_failed": 0, 00:27:43.587 "io_timeout": 0, 00:27:43.587 "avg_latency_us": 9854.506699271486, 00:27:43.587 "min_latency_us": 1979.7333333333333, 00:27:43.587 "max_latency_us": 13707.946666666667 00:27:43.587 } 00:27:43.588 ], 00:27:43.588 "core_count": 1 00:27:43.588 } 00:27:43.588 17:43:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:43.588 [2024-12-06 17:43:28.731855] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:27:43.588 [2024-12-06 17:43:28.731913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707011 ] 00:27:43.588 [2024-12-06 17:43:28.817108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.588 [2024-12-06 17:43:28.846488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.588 [2024-12-06 17:43:31.121212] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:43.588 [2024-12-06 17:43:31.121257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.588 [2024-12-06 17:43:31.121266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.588 [2024-12-06 17:43:31.121272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.588 [2024-12-06 17:43:31.121278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.588 [2024-12-06 17:43:31.121284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.588 [2024-12-06 17:43:31.121289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.588 [2024-12-06 17:43:31.121294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.588 [2024-12-06 17:43:31.121299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.588 [2024-12-06 17:43:31.121305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:27:43.588 [2024-12-06 17:43:31.121326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:27:43.588 [2024-12-06 17:43:31.121338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23799d0 (9): Bad file descriptor 00:27:43.588 [2024-12-06 17:43:31.134343] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:27:43.588 Running I/O for 1 seconds... 00:27:43.588 12871.00 IOPS, 50.28 MiB/s 00:27:43.588 Latency(us) 00:27:43.588 [2024-12-06T16:43:35.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.588 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:43.588 Verification LBA range: start 0x0 length 0x4000 00:27:43.588 NVMe0n1 : 1.01 12942.37 50.56 0.00 0.00 9854.51 1979.73 13707.95 00:27:43.588 [2024-12-06T16:43:35.654Z] =================================================================================================================== 00:27:43.588 [2024-12-06T16:43:35.654Z] Total : 12942.37 50.56 0.00 0.00 9854.51 1979.73 13707.95 00:27:43.588 17:43:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:43.588 17:43:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:43.588 17:43:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:43.848 17:43:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:43.848 17:43:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:44.109 17:43:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:44.369 17:43:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1707011 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1707011 ']' 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1707011 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1707011 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1707011' 00:27:47.667 killing process with pid 1707011 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1707011 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1707011 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:47.667 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:47.927 rmmod nvme_tcp 00:27:47.927 rmmod nvme_fabrics 00:27:47.927 rmmod nvme_keyring 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1706736 ']' 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1706736 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1706736 ']' 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1706736 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706736 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706736' 00:27:47.927 killing process with pid 1706736 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1706736 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1706736 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.927 17:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.467 00:27:50.467 real 0m39.835s 00:27:50.467 user 2m3.036s 00:27:50.467 sys 0m8.490s 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:50.467 ************************************ 00:27:50.467 END TEST nvmf_failover 00:27:50.467 ************************************ 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.467 ************************************ 00:27:50.467 START TEST nvmf_host_discovery 00:27:50.467 ************************************ 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:50.467 * Looking for test storage... 
00:27:50.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.467 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:50.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.467 --rc genhtml_branch_coverage=1 00:27:50.467 --rc genhtml_function_coverage=1 00:27:50.467 --rc genhtml_legend=1 00:27:50.467 --rc geninfo_all_blocks=1 00:27:50.467 --rc geninfo_unexecuted_blocks=1 00:27:50.467 00:27:50.467 ' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:50.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.468 --rc genhtml_branch_coverage=1 00:27:50.468 --rc genhtml_function_coverage=1 00:27:50.468 --rc genhtml_legend=1 00:27:50.468 --rc geninfo_all_blocks=1 00:27:50.468 --rc geninfo_unexecuted_blocks=1 00:27:50.468 00:27:50.468 ' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:50.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.468 --rc genhtml_branch_coverage=1 00:27:50.468 --rc genhtml_function_coverage=1 00:27:50.468 --rc genhtml_legend=1 00:27:50.468 --rc geninfo_all_blocks=1 00:27:50.468 --rc geninfo_unexecuted_blocks=1 00:27:50.468 00:27:50.468 ' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:50.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.468 --rc genhtml_branch_coverage=1 00:27:50.468 --rc genhtml_function_coverage=1 00:27:50.468 --rc genhtml_legend=1 00:27:50.468 --rc geninfo_all_blocks=1 00:27:50.468 --rc geninfo_unexecuted_blocks=1 00:27:50.468 00:27:50.468 ' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:50.468 17:43:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:50.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:27:50.468 17:43:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:58.608 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:58.608 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.608 17:43:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:58.608 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:58.608 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:58.608 
17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.608 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:58.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:27:58.609 00:27:58.609 --- 10.0.0.2 ping statistics --- 00:27:58.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.609 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms
00:27:58.609
00:27:58.609 --- 10.0.0.1 ping statistics ---
00:27:58.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:58.609 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1709611
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1709611
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1709611 ']'
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:58.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:58.609 17:43:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:58.609 [2024-12-06 17:43:49.819831] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
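A note for readers reconstructing the rig from the trace above: before nvmfappstart, nvmf_tcp_init moved the target-side e810 port into a private network namespace so that initiator and target traffic crosses the physical link between the two ports rather than the local loopback path. A minimal sketch of the equivalent commands, using the interface names and addresses from this run:

    # Target port leaves the root namespace; initiator port stays behind.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator side, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two ping transcripts above are exactly this reachability check, and anything launched through NVMF_TARGET_NS_CMD from here on runs inside cvl_0_0_ns_spdk.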
00:27:58.609 [2024-12-06 17:43:49.819922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.609 [2024-12-06 17:43:49.918368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.609 [2024-12-06 17:43:49.967575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.609 [2024-12-06 17:43:49.967623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.609 [2024-12-06 17:43:49.967631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.609 [2024-12-06 17:43:49.967645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.609 [2024-12-06 17:43:49.967652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.609 [2024-12-06 17:43:49.968367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.609 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.609 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:58.609 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:58.609 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:58.609 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.871 [2024-12-06 17:43:50.694456] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.871 [2024-12-06 17:43:50.706693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.871 null0 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.871 null1 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1709645 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1709645 /tmp/host.sock 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1709645 ']' 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:58.871 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.871 17:43:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.871 [2024-12-06 17:43:50.813674] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
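At this point two independent SPDK processes are up: the target (nvmfpid 1709611) inside the namespace on the default RPC socket, and a host-side instance (hostpid 1709645) on /tmp/host.sock whose bdev_nvme layer plays the NVMe-oF initiator. Roughly, and assuming scripts/rpc.py from the SPDK tree (the real waitforlisten helper is more careful, retrying up to max_retries):

    # Target: runs inside the namespace, answers RPCs on /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # Host/initiator: separate process, separate RPC socket, separate core mask
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    # Simplified stand-in for waitforlisten: poll until the RPC socket answers
    until ./scripts/rpc.py -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

The distinction matters for reading the rest of the trace: every rpc_cmd carrying -s /tmp/host.sock is aimed at the initiator, everything else at the target.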
00:27:58.871 [2024-12-06 17:43:50.813750] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709645 ] 00:27:58.871 [2024-12-06 17:43:50.905083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.131 [2024-12-06 17:43:50.958108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:59.704 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq 
-r '.[].name' 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.965 [2024-12-06 17:43:51.949915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:59.965 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.966 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:59.966 17:43:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.966 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:59.966 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:59.966 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.966 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:59.966 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.966 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:59.966 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:59.966 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:59.966 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:28:00.226 17:43:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:00.811 [2024-12-06 17:43:52.685861] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:00.811 [2024-12-06 17:43:52.685891] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:00.811 [2024-12-06 17:43:52.685906] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:00.811 
[2024-12-06 17:43:52.774175] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:01.071 [2024-12-06 17:43:52.955445] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:01.071 [2024-12-06 17:43:52.956669] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1418320:1 started. 00:28:01.071 [2024-12-06 17:43:52.958560] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:01.071 [2024-12-06 17:43:52.958588] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:01.071 [2024-12-06 17:43:52.962791] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1418320 was disconnected and freed. delete nvme_qpair. 00:28:01.330 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:01.330 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:01.330 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:01.330 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:01.330 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:01.330 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.330 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.331 17:43:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.331 [2024-12-06 17:43:53.390078] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14186a0:1 started. 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:01.331 [2024-12-06 17:43:53.393382] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14186a0 was disconnected and freed. delete nvme_qpair. 
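To keep the thread visible through the xtrace noise: the target has exported subsystem nqn.2016-06.io.spdk:cnode0, the host has started bdev_nvme_start_discovery against the discovery service on port 8009, and each namespace hot-added on the target surfaces on the host as a new bdev (nvme0n1, then nvme0n2 for null1 just above), bumping the notification counters the script polls. Condensed into plain rpc.py calls (a sketch of what rpc_cmd has issued so far, not the verbatim script):

    # Target side (default RPC socket)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    # Host side: the discovery service attaches controllers as log pages report paths
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # nvme0n1 ...

A second listener on port 4421 is added next, which is why the later path checks compare get_subsystem_paths output against "4420 4421".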
00:28:01.331 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.590 [2024-12-06 17:43:53.498265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:01.590 [2024-12-06 17:43:53.499266] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:01.590 [2024-12-06 17:43:53.499287] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:01.590 17:43:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.590 [2024-12-06 17:43:53.587554] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:01.590 17:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:01.848 [2024-12-06 17:43:53.894240] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:28:01.848 [2024-12-06 17:43:53.894283] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:01.848 [2024-12-06 17:43:53.894293] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:01.848 [2024-12-06 17:43:53.894298] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.785 [2024-12-06 17:43:54.734212] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:02.785 [2024-12-06 17:43:54.734229] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:02.785 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:02.786 [2024-12-06 17:43:54.739777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.786 [2024-12-06 17:43:54.739791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.786 [2024-12-06 17:43:54.739798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.786 [2024-12-06 17:43:54.739803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.786 [2024-12-06 17:43:54.739809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.786 [2024-12-06 17:43:54.739814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.786 [2024-12-06 17:43:54.739820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.786 [2024-12-06 17:43:54.739829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.786 [2024-12-06 17:43:54.739834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.786 [2024-12-06 17:43:54.749792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.786 [2024-12-06 17:43:54.759824] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:02.786 [2024-12-06 17:43:54.759833] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:02.786 [2024-12-06 17:43:54.759838] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:02.786 [2024-12-06 17:43:54.759842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:02.786 [2024-12-06 17:43:54.759855] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:02.786 [2024-12-06 17:43:54.760053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.786 [2024-12-06 17:43:54.760063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:02.786 [2024-12-06 17:43:54.760069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:02.786 [2024-12-06 17:43:54.760077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:02.786 [2024-12-06 17:43:54.760085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:02.786 [2024-12-06 17:43:54.760090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:02.786 [2024-12-06 17:43:54.760096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:28:02.786 [2024-12-06 17:43:54.760101] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:02.786 [2024-12-06 17:43:54.760105] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:02.786 [2024-12-06 17:43:54.760109] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:02.786 [2024-12-06 17:43:54.769885] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:02.786 [2024-12-06 17:43:54.769894] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:02.786 [2024-12-06 17:43:54.769898] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:02.786 [2024-12-06 17:43:54.769901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:02.786 [2024-12-06 17:43:54.769917] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:02.786 [2024-12-06 17:43:54.770206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.786 [2024-12-06 17:43:54.770215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:02.786 [2024-12-06 17:43:54.770220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:02.786 [2024-12-06 17:43:54.770228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:02.786 [2024-12-06 17:43:54.770235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:02.786 [2024-12-06 17:43:54.770240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:02.786 [2024-12-06 17:43:54.770245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:02.786 [2024-12-06 17:43:54.770249] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:02.786 [2024-12-06 17:43:54.770252] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:02.786 [2024-12-06 17:43:54.770255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:02.786 [2024-12-06 17:43:54.779946] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:02.786 [2024-12-06 17:43:54.779956] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:02.786 [2024-12-06 17:43:54.779960] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:02.786 [2024-12-06 17:43:54.779963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:02.786 [2024-12-06 17:43:54.779974] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:02.786 [2024-12-06 17:43:54.780271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.786 [2024-12-06 17:43:54.780280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:02.786 [2024-12-06 17:43:54.780285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:02.786 [2024-12-06 17:43:54.780293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:02.786 [2024-12-06 17:43:54.780300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:02.786 [2024-12-06 17:43:54.780305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:02.786 [2024-12-06 17:43:54.780310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:02.786 [2024-12-06 17:43:54.780314] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:02.786 [2024-12-06 17:43:54.780318] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:02.786 [2024-12-06 17:43:54.780322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:02.786 [2024-12-06 17:43:54.790003] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:02.786 [2024-12-06 17:43:54.790013] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:02.786 [2024-12-06 17:43:54.790016] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:02.786 [2024-12-06 17:43:54.790019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:02.786 [2024-12-06 17:43:54.790029] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:02.786 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:02.786 [2024-12-06 17:43:54.790327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.786 [2024-12-06 17:43:54.790336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:02.787 [2024-12-06 17:43:54.790341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:02.787 [2024-12-06 17:43:54.790349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:02.787 [2024-12-06 17:43:54.790356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:02.787 [2024-12-06 17:43:54.790361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:02.787 [2024-12-06 17:43:54.790368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:02.787 [2024-12-06 17:43:54.790376] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:02.787 [2024-12-06 17:43:54.790380] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:02.787 [2024-12-06 17:43:54.790383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:02.787 [2024-12-06 17:43:54.800113] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:02.787 [2024-12-06 17:43:54.800123] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:02.787 [2024-12-06 17:43:54.800126] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:02.787 [2024-12-06 17:43:54.800129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:02.787 [2024-12-06 17:43:54.800140] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
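The rpc_cmd/jq pipelines traced at host/discovery.sh@59 and @55 above (and @63 further below) implement the small list helpers that the wait conditions compare against. A sketch consistent with those trace lines; the function bodies are assumed, not copied from the script:

    # controller names known to the host app on /tmp/host.sock
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    # bdev names the host currently sees
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # transport service IDs (ports) of the paths behind one controller
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The trailing xargs joins the sorted values into a single line, which is what the string comparisons in the wait conditions expect.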
00:28:02.787 [2024-12-06 17:43:54.800313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.787 [2024-12-06 17:43:54.800326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:02.787 [2024-12-06 17:43:54.800331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:02.787 [2024-12-06 17:43:54.800338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:02.787 [2024-12-06 17:43:54.800346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:02.787 [2024-12-06 17:43:54.800350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:02.787 [2024-12-06 17:43:54.800355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:02.787 [2024-12-06 17:43:54.800359] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:02.787 [2024-12-06 17:43:54.800363] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:02.787 [2024-12-06 17:43:54.800366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:02.787 [2024-12-06 17:43:54.810169] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:02.787 [2024-12-06 17:43:54.810177] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:02.787 [2024-12-06 17:43:54.810180] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:02.787 [2024-12-06 17:43:54.810183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:02.787 [2024-12-06 17:43:54.810192] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:02.787 [2024-12-06 17:43:54.810497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.787 [2024-12-06 17:43:54.810505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:02.787 [2024-12-06 17:43:54.810510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:02.787 [2024-12-06 17:43:54.810518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:02.787 [2024-12-06 17:43:54.810529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:02.787 [2024-12-06 17:43:54.810533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:02.787 [2024-12-06 17:43:54.810538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:02.787 [2024-12-06 17:43:54.810543] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:28:02.787 [2024-12-06 17:43:54.810546] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:02.787 [2024-12-06 17:43:54.810549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:02.787 [2024-12-06 17:43:54.820221] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:02.787 [2024-12-06 17:43:54.820228] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:02.787 [2024-12-06 17:43:54.820232] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:02.787 [2024-12-06 17:43:54.820235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:02.787 [2024-12-06 17:43:54.820244] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:02.787 [2024-12-06 17:43:54.820532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.787 [2024-12-06 17:43:54.820540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:02.787 [2024-12-06 17:43:54.820545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:02.787 [2024-12-06 17:43:54.820553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:02.787 [2024-12-06 17:43:54.820578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:02.787 [2024-12-06 17:43:54.820584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:02.787 [2024-12-06 17:43:54.820589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:02.787 [2024-12-06 17:43:54.820593] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:02.787 [2024-12-06 17:43:54.820597] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:02.787 [2024-12-06 17:43:54.820600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:02.787 [2024-12-06 17:43:54.830272] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:02.787 [2024-12-06 17:43:54.830281] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:02.787 [2024-12-06 17:43:54.830284] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:02.787 [2024-12-06 17:43:54.830288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:02.787 [2024-12-06 17:43:54.830298] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:02.787 [2024-12-06 17:43:54.830597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.787 [2024-12-06 17:43:54.830605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:02.787 [2024-12-06 17:43:54.830610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:02.787 [2024-12-06 17:43:54.830618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:02.787 [2024-12-06 17:43:54.830630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:02.787 [2024-12-06 17:43:54.830635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:02.787 [2024-12-06 17:43:54.830644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:02.787 [2024-12-06 17:43:54.830648] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:02.787 [2024-12-06 17:43:54.830652] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:02.787 [2024-12-06 17:43:54.830655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.787 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:02.788 [2024-12-06 17:43:54.840326] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:02.788 [2024-12-06 17:43:54.840335] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:02.788 [2024-12-06 17:43:54.840341] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:02.788 [2024-12-06 17:43:54.840345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:02.788 [2024-12-06 17:43:54.840354] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
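The cycle repeating through this stretch of the log (Delete qpairs for reset, connect() failed errno = 111, controller reinitialization failed, Start reconnecting ctrlr.) is expected here: host/discovery.sh@127 removed the 10.0.0.2:4420 listener, so every reconnect attempt against that port is refused until the discovery service re-attaches the controller via 4421. On Linux, errno 111 is ECONNREFUSED, which can be confirmed on any build host:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # ECONNREFUSED Connection refused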
00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:02.788 [2024-12-06 17:43:54.840645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.788 [2024-12-06 17:43:54.840655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:02.788 [2024-12-06 17:43:54.840660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:02.788 [2024-12-06 17:43:54.840667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:02.788 [2024-12-06 17:43:54.840678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:02.788 [2024-12-06 17:43:54.840683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:02.788 [2024-12-06 17:43:54.840688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:02.788 [2024-12-06 17:43:54.840691] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:02.788 [2024-12-06 17:43:54.840695] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:02.788 [2024-12-06 17:43:54.840698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:02.788 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:03.047 [2024-12-06 17:43:54.850382] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:03.047 [2024-12-06 17:43:54.850391] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:28:03.047 [2024-12-06 17:43:54.850394] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:03.047 [2024-12-06 17:43:54.850397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:03.047 [2024-12-06 17:43:54.850406] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:03.047 [2024-12-06 17:43:54.850869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-12-06 17:43:54.850904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:03.047 [2024-12-06 17:43:54.850913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:03.047 [2024-12-06 17:43:54.850926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:03.047 [2024-12-06 17:43:54.850946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:03.047 [2024-12-06 17:43:54.850952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:03.047 [2024-12-06 17:43:54.850957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:03.047 [2024-12-06 17:43:54.850962] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:03.047 [2024-12-06 17:43:54.850966] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:03.047 [2024-12-06 17:43:54.850969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:03.047 [2024-12-06 17:43:54.860436] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:03.047 [2024-12-06 17:43:54.860447] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:03.047 [2024-12-06 17:43:54.860450] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:03.047 [2024-12-06 17:43:54.860453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:03.047 [2024-12-06 17:43:54.860465] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:03.047 [2024-12-06 17:43:54.860841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.047 [2024-12-06 17:43:54.860871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ea470 with addr=10.0.0.2, port=4420 00:28:03.047 [2024-12-06 17:43:54.860879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ea470 is same with the state(6) to be set 00:28:03.047 [2024-12-06 17:43:54.860893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ea470 (9): Bad file descriptor 00:28:03.047 [2024-12-06 17:43:54.860901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:03.047 [2024-12-06 17:43:54.860906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:03.047 [2024-12-06 17:43:54.860912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:03.047 [2024-12-06 17:43:54.860917] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:03.047 [2024-12-06 17:43:54.860921] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:03.047 [2024-12-06 17:43:54.860924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:03.047 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.047 [2024-12-06 17:43:54.863981] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:03.047 [2024-12-06 17:43:54.863995] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:03.047 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:28:03.047 17:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# [[ 4421 == \4\4\2\1 ]] 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.985 17:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:03.985 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:04.244 
17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:04.244 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.245 17:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.184 [2024-12-06 17:43:57.182901] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:05.184 [2024-12-06 17:43:57.182915] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:05.184 [2024-12-06 17:43:57.182923] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:05.477 [2024-12-06 17:43:57.270167] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:05.477 [2024-12-06 17:43:57.369909] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:28:05.477 [2024-12-06 17:43:57.370561] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x14243e0:1 started. 
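The host/discovery.sh@141 step above restarts the discovery service through rpc_cmd, which the test harness routes to SPDK's scripts/rpc.py. An equivalent direct invocation would look like this (path relative to the SPDK repo root; rpc_cmd may add retry logic around it):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

The -w flag maps to "wait_for_attach": true in the JSON-RPC request, blocking until the discovered subsystems are attached. Because a discovery service named nvme then already exists, the repeat call at host/discovery.sh@143 is rejected with error -17 ("File exists"), exactly as the request/response dump that follows shows.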
00:28:05.477 [2024-12-06 17:43:57.371889] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:05.477 [2024-12-06 17:43:57.371909] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:05.477 [2024-12-06 17:43:57.373998] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x14243e0 was disconnected and freed. delete nvme_qpair. 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.477 request: 00:28:05.477 { 00:28:05.477 "name": "nvme", 00:28:05.477 "trtype": "tcp", 00:28:05.477 "traddr": "10.0.0.2", 00:28:05.477 "adrfam": "ipv4", 00:28:05.477 "trsvcid": "8009", 00:28:05.477 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:05.477 "wait_for_attach": true, 00:28:05.477 "method": "bdev_nvme_start_discovery", 00:28:05.477 "req_id": 1 00:28:05.477 } 00:28:05.477 Got JSON-RPC error response 00:28:05.477 response: 00:28:05.477 { 00:28:05.477 "code": -17, 00:28:05.477 "message": "File exists" 00:28:05.477 } 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:05.477 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.478 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.478 request: 00:28:05.478 { 00:28:05.478 "name": "nvme_second", 00:28:05.478 "trtype": "tcp", 00:28:05.478 "traddr": "10.0.0.2", 00:28:05.478 "adrfam": "ipv4", 00:28:05.478 "trsvcid": "8009", 00:28:05.478 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:05.478 "wait_for_attach": true, 00:28:05.478 "method": 
"bdev_nvme_start_discovery", 00:28:05.478 "req_id": 1 00:28:05.478 } 00:28:05.478 Got JSON-RPC error response 00:28:05.478 response: 00:28:05.478 { 00:28:05.478 "code": -17, 00:28:05.478 "message": "File exists" 00:28:05.478 } 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:05.811 17:43:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.811 17:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:06.751 [2024-12-06 17:43:58.635223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.751 [2024-12-06 17:43:58.635245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1423a20 with addr=10.0.0.2, port=8010 00:28:06.751 [2024-12-06 17:43:58.635256] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:06.751 [2024-12-06 17:43:58.635265] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:06.751 [2024-12-06 17:43:58.635270] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:07.690 [2024-12-06 17:43:59.637612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.690 [2024-12-06 17:43:59.637631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1423a20 with addr=10.0.0.2, port=8010 00:28:07.690 [2024-12-06 17:43:59.637642] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:07.690 [2024-12-06 17:43:59.637647] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:07.690 [2024-12-06 17:43:59.637651] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:08.629 [2024-12-06 17:44:00.639683] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:08.629 request: 00:28:08.629 { 00:28:08.629 "name": "nvme_second", 00:28:08.629 "trtype": "tcp", 00:28:08.629 "traddr": "10.0.0.2", 00:28:08.629 "adrfam": "ipv4", 00:28:08.629 "trsvcid": "8010", 00:28:08.629 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:08.629 "wait_for_attach": false, 00:28:08.629 "attach_timeout_ms": 3000, 00:28:08.629 "method": "bdev_nvme_start_discovery", 00:28:08.629 "req_id": 1 00:28:08.629 } 00:28:08.629 Got JSON-RPC error response 00:28:08.629 response: 00:28:08.629 { 00:28:08.629 "code": -110, 00:28:08.629 "message": "Connection timed out" 00:28:08.629 } 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:08.629 17:44:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:08.629 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1709645 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:08.891 rmmod nvme_tcp 00:28:08.891 rmmod nvme_fabrics 00:28:08.891 rmmod nvme_keyring 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1709611 ']' 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1709611 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1709611 ']' 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1709611 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1709611 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709611' 00:28:08.891 killing process with pid 1709611 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1709611 
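The common/autotest_common.sh@954-@978 frames above are the harness's killprocess helper shutting down the target (pid 1709611, running as reactor_1). A rough sketch of the checks implied by the trace; the "sudo" special case is hinted at by the '[' reactor_1 = sudo ']' comparison, but its handling is an assumption and is omitted here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1               # the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 1  # must still be running
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # upstream branches when process_name is "sudo" (targeting the
        # child process instead); that path is not exercised in this run
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }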
00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1709611 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.891 17:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:11.447 00:28:11.447 real 0m20.869s 00:28:11.447 user 0m24.884s 00:28:11.447 sys 0m7.184s 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.447 ************************************ 00:28:11.447 END TEST nvmf_host_discovery 00:28:11.447 ************************************ 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.447 ************************************ 00:28:11.447 START TEST nvmf_host_multipath_status 00:28:11.447 ************************************ 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:11.447 * Looking for test storage... 
00:28:11.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.447 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:11.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.448 --rc genhtml_branch_coverage=1 00:28:11.448 --rc genhtml_function_coverage=1 00:28:11.448 --rc genhtml_legend=1 00:28:11.448 --rc geninfo_all_blocks=1 00:28:11.448 --rc geninfo_unexecuted_blocks=1 00:28:11.448 00:28:11.448 ' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:11.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.448 --rc genhtml_branch_coverage=1 00:28:11.448 --rc genhtml_function_coverage=1 00:28:11.448 --rc genhtml_legend=1 00:28:11.448 --rc geninfo_all_blocks=1 00:28:11.448 --rc geninfo_unexecuted_blocks=1 00:28:11.448 00:28:11.448 ' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:11.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.448 --rc genhtml_branch_coverage=1 00:28:11.448 --rc genhtml_function_coverage=1 00:28:11.448 --rc genhtml_legend=1 00:28:11.448 --rc geninfo_all_blocks=1 00:28:11.448 --rc geninfo_unexecuted_blocks=1 00:28:11.448 00:28:11.448 ' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:11.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.448 --rc genhtml_branch_coverage=1 00:28:11.448 --rc genhtml_function_coverage=1 00:28:11.448 --rc genhtml_legend=1 00:28:11.448 --rc geninfo_all_blocks=1 00:28:11.448 --rc geninfo_unexecuted_blocks=1 00:28:11.448 00:28:11.448 ' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
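[annotation] The lcov probe above drives cmp_versions from scripts/common.sh: split both dotted versions on ".-:", then compare component by component, padding the shorter with zeros ("lt 1.15 2" succeeds, which is what selects the pre-2.x lcov option syntax). A condensed, numeric-only sketch of that comparison (the real helper also validates each component):

# Condensed sketch of the dotted-version compare exercised above.
lt() {  # lt A B -> success (0) if version A < version B
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates the 2.x option syntax"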
00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:11.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.448 17:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.621 17:44:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:19.621 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
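[annotation] gather_supported_nvmf_pci_devs above matches the two E810 ports by PCI vendor:device ID 0x8086:0x159b and then resolves their net interfaces through sysfs. A standalone sketch of the same lookup using lspci (the /sys/bus/pci/devices/$pci/net layout is the one the log itself walks):

# Standalone sketch of the E810 discovery done above: find PCI functions
# with vendor:device 8086:159b, then list each one's kernel net devices.
for pci in $(lspci -Dnn -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    # Every netdev bound to the function appears under its sysfs node.
    for net in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
    done
done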
00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:19.621 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:19.621 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:28:19.621 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.621 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.622 17:44:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:19.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:28:19.622 00:28:19.622 --- 10.0.0.2 ping statistics --- 00:28:19.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.622 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:19.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:28:19.622 00:28:19.622 --- 10.0.0.1 ping statistics --- 00:28:19.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.622 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1712393 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1712393 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1712393 ']' 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.622 17:44:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.622 17:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:19.622 [2024-12-06 17:44:10.783232] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:28:19.622 [2024-12-06 17:44:10.783321] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.622 [2024-12-06 17:44:10.883522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:19.622 [2024-12-06 17:44:10.934507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.622 [2024-12-06 17:44:10.934559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.622 [2024-12-06 17:44:10.934568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.622 [2024-12-06 17:44:10.934575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.622 [2024-12-06 17:44:10.934582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.622 [2024-12-06 17:44:10.936199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.622 [2024-12-06 17:44:10.936203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.622 17:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.622 17:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:19.622 17:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.622 17:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.622 17:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:19.622 17:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.622 17:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1712393 00:28:19.622 17:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:19.883 [2024-12-06 17:44:11.792782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.883 17:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:20.143 Malloc0 00:28:20.143 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:28:20.402 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:20.402 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.661 [2024-12-06 17:44:12.602679] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.661 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:20.921 [2024-12-06 17:44:12.795232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:20.921 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1712440 00:28:20.921 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:20.921 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:20.921 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1712440 /var/tmp/bdevperf.sock 00:28:20.921 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1712440 ']' 00:28:20.921 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:20.921 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.921 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:20.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
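[annotation] Everything from nvmf_create_transport through the second add_listener above is the standard target provisioning sequence for this test; consolidated into one place, the RPC calls are (rpc.py path shortened to a variable, all values copied from the log):

# Consolidated target-side provisioning sequence from the log above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192   # '-t tcp -o' is NVMF_TRANSPORT_OPTS; -u 8192 sets io-unit-size
$rpc_py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512-byte blocks (MALLOC_BDEV_SIZE/BLOCK_SIZE)
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2          # any host, serial, ANA reporting on, max 2 namespaces
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The two listeners on ports 4420 and 4421 are the two paths that bdevperf then attaches with -x multipath, so the ANA state of each listener maps directly onto one I/O path.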
00:28:20.921 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.921 17:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:21.859 17:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.859 17:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:21.859 17:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:21.860 17:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:22.429 Nvme0n1 00:28:22.429 17:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:22.689 Nvme0n1 00:28:22.689 17:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:22.689 17:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:24.604 17:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:24.604 17:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:24.864 17:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:25.124 17:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:26.065 17:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:26.065 17:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:26.065 17:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.065 17:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:26.326 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.326 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:26.326 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.326 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:26.326 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:26.326 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:26.326 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.326 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:26.586 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.586 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:26.586 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.586 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:26.845 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.845 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:26.845 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.845 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:26.845 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.845 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:26.845 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.845 17:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:27.104 17:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.104 17:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:27.104 17:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
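[annotation] Each check_status round above reduces to one RPC against the bdevperf app plus a jq filter over its io_paths; the three fields probed are current, connected, and accessible. A sketch of that port_status helper as the log shows it (socket path, RPC name, and jq filter verbatim):

# Sketch of the per-port status probe used repeatedly above: query the
# bdevperf app over its RPC socket and pick one field for one trsvcid.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

port_status() {    # port_status <port> <field> <expected>
    local status
    status=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ $status == "$3" ]]
}

port_status 4420 current true      # is 4420 the active path right now?
port_status 4421 accessible true   # does ANA leave 4421 reachable?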
00:28:27.364 17:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:27.624 17:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:28.563 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:28.563 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:28.563 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.563 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:28.823 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:28.823 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:28.823 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.823 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:28.823 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.823 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:28.823 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.823 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:29.083 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.083 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:29.083 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.083 17:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:29.343 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.343 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:29.343 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:29.343 17:44:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.343 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.343 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:29.343 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.343 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:29.603 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.603 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:29.603 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:29.862 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:29.862 17:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:31.244 17:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:31.244 17:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:31.244 17:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.244 17:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:31.244 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.244 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:31.244 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.244 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:31.244 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:31.244 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:31.244 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.244 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:31.504 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.504 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:31.504 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:31.504 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.764 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.764 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:31.764 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.764 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:31.764 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.764 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:31.764 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.764 17:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:32.024 17:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:32.024 17:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:32.024 17:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:32.284 17:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:32.543 17:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:33.482 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:33.482 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:33.482 17:44:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.482 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:33.741 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:33.742 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:33.742 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.742 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:33.742 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:33.742 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:33.742 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.742 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:34.001 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:34.001 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:34.001 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.001 17:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:34.261 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:34.261 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:34.261 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.261 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:34.261 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:34.261 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:34.261 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:34.261 17:44:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:34.521 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:34.521 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:34.521 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:34.783 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:35.044 17:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:35.984 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:35.985 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:35.985 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.985 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:35.985 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:35.985 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:35.985 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.985 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:36.244 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:36.244 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:36.244 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.244 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:36.505 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:36.505 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:36.505 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.505 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:36.765 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:36.765 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:36.765 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.765 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:36.765 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:36.765 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:36.765 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.765 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:37.025 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:37.025 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:37.025 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:37.285 17:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:37.285 17:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:38.666 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:38.666 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:38.667 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.667 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:38.667 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:38.667 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:38.667 17:44:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.667 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:38.667 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.667 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:38.667 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.667 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:38.927 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.927 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:38.927 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.927 17:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:39.188 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.188 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:39.188 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.188 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:39.188 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:39.188 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:39.188 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:39.188 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:39.449 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.449 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:39.709 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:28:39.709 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:39.969 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:39.969 17:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:41.353 17:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:41.353 17:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:41.353 17:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.353 17:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:41.353 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.353 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:41.353 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.353 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:41.353 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.353 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:41.353 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.353 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:41.613 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.613 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:41.613 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.613 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:41.874 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.874 17:44:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:41.874 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.874 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:41.874 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.874 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:41.874 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.874 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:42.135 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.135 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:42.135 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:42.462 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:42.462 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:43.520 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:43.520 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:43.520 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.520 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:43.780 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:43.780 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:43.780 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.780 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:43.780 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:43.780 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:43.780 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.780 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:44.040 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.040 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:44.040 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.040 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:44.299 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.299 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:44.299 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.299 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:44.559 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.559 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:44.559 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.559 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:44.559 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.559 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:44.559 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:44.820 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:45.084 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
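Every assertion in this trace goes through the same few helpers from test/nvmf/host/multipath_status.sh. The sketch below is reconstructed purely from the xtrace lines above, so the exact function bodies and the rpc_py shorthand are inferred rather than copied from the script:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # port_status <trsvcid> <field> <expected>: ask bdevperf (over its RPC socket)
  # for its view of the I/O paths and compare one flag for the given listener.
  port_status() {
      local port=$1 field=$2 expected=$3 actual
      actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
               jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }

  # check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>:
  # six booleans in the order the trace shows (multipath_status.sh@68 through @73).
  check_status() {
      port_status 4420 current "$1" && port_status 4421 current "$2" &&
      port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

  # set_ANA_state <state-for-4420> <state-for-4421>: flip both listeners on the
  # target side (multipath_status.sh@59 and @60).
  set_ANA_state() {
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

The sleep 1 after each set_ANA_state gives the initiator time to pick up the new ANA state before check_status asserts the flags: "connected" tracks the TCP connection, "accessible" the ANA state, and "current" whether the path is actually selected for I/O under the active multipath policy.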
00:28:46.027 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:46.027 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:46.027 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.027 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:46.288 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.288 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:46.288 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.288 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:46.288 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.288 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:46.288 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.288 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:46.549 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.549 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:46.549 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.549 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:46.810 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.810 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:46.810 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.810 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:46.810 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.810 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:46.810 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.810 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:47.070 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.070 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:47.070 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:47.331 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:47.331 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.713 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:48.974 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:28:48.974 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:48.974 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.974 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:49.234 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.234 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:49.234 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.234 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:49.234 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:49.234 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:49.234 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.234 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1712440 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1712440 ']' 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1712440 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1712440 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1712440' 00:28:49.494 killing process with pid 1712440 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1712440 00:28:49.494 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1712440 00:28:49.494 { 00:28:49.494 "results": [ 00:28:49.494 { 00:28:49.494 "job": "Nvme0n1", 
00:28:49.494 "core_mask": "0x4", 00:28:49.494 "workload": "verify", 00:28:49.494 "status": "terminated", 00:28:49.494 "verify_range": { 00:28:49.494 "start": 0, 00:28:49.494 "length": 16384 00:28:49.494 }, 00:28:49.494 "queue_depth": 128, 00:28:49.494 "io_size": 4096, 00:28:49.494 "runtime": 26.821044, 00:28:49.494 "iops": 11947.633358343546, 00:28:49.494 "mibps": 46.67044280602948, 00:28:49.494 "io_failed": 0, 00:28:49.494 "io_timeout": 0, 00:28:49.494 "avg_latency_us": 10692.958314359896, 00:28:49.494 "min_latency_us": 351.5733333333333, 00:28:49.494 "max_latency_us": 3075822.933333333 00:28:49.494 } 00:28:49.494 ], 00:28:49.494 "core_count": 1 00:28:49.494 } 00:28:49.787 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1712440 00:28:49.787 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:49.787 [2024-12-06 17:44:12.871786] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:28:49.787 [2024-12-06 17:44:12.871862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712440 ] 00:28:49.788 [2024-12-06 17:44:12.965135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.788 [2024-12-06 17:44:13.014997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.788 Running I/O for 90 seconds... 00:28:49.788 10687.00 IOPS, 41.75 MiB/s [2024-12-06T16:44:41.854Z] 10959.50 IOPS, 42.81 MiB/s [2024-12-06T16:44:41.854Z] 11039.67 IOPS, 43.12 MiB/s [2024-12-06T16:44:41.854Z] 11479.75 IOPS, 44.84 MiB/s [2024-12-06T16:44:41.854Z] 11764.60 IOPS, 45.96 MiB/s [2024-12-06T16:44:41.854Z] 11965.67 IOPS, 46.74 MiB/s [2024-12-06T16:44:41.854Z] 12100.86 IOPS, 47.27 MiB/s [2024-12-06T16:44:41.854Z] 12192.38 IOPS, 47.63 MiB/s [2024-12-06T16:44:41.854Z] 12270.33 IOPS, 47.93 MiB/s [2024-12-06T16:44:41.854Z] 12340.10 IOPS, 48.20 MiB/s [2024-12-06T16:44:41.854Z] 12396.91 IOPS, 48.43 MiB/s [2024-12-06T16:44:41.854Z] [2024-12-06 17:44:26.634929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.788 [2024-12-06 17:44:26.634964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.634981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.634987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.634998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
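The "results" block that bdevperf printed above is internally consistent, which makes a quick cross-check possible. A throwaway calculation, not part of the test, with the numbers copied from that JSON:

  # "mibps" should equal iops * io_size / 2^20, and with the verify job pinned
  # at queue_depth 128, Little's law (qd / iops) approximates "avg_latency_us".
  awk 'BEGIN {
      iops = 11947.633358343546; io_size = 4096; qd = 128
      printf "throughput: %.2f MiB/s (reported 46.67)\n", iops * io_size / (1024 * 1024)
      printf "latency: %.0f us (reported ~10693)\n", qd / iops * 1e6
  }'

Note that "io_failed" is 0 even though the dump around this point is full of rejected completions: the NVMe bdev multipath layer retries those commands on the surviving path, so they never surface to bdevperf as I/O errors, which is the behavior this test exists to verify.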
00:28:49.788 [2024-12-06 17:44:26.635184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.788 [2024-12-06 17:44:26.635631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:49.788 [2024-12-06 17:44:26.635646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
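Every completion in this dump carries ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path-related) with status code 0x02: the target's answer to I/O that arrives on a listener whose ANA state is inaccessible. The dnr:0 field shows the do-not-retry bit is clear, so the initiator is free to resubmit on the other path, and each 474:spdk_nvme_print_completion line pairs with the preceding 243:nvme_io_qpair_print_command line describing the rejected command. To tally these from the saved file, something like the following works (illustrative one-liners with GNU grep; try.txt is the file cat'ed above):

  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  # total path-related rejections bdevperf observed during the ANA flips
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$log"
  # break the rejected commands down by opcode
  grep -o '\*NOTICE\*: \(READ\|WRITE\)' "$log" | sort | uniq -c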
00:28:49.789 [2024-12-06 17:44:26.635867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.635992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.635998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.789 [2024-12-06 17:44:26.636202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:49.789 [2024-12-06 17:44:26.636213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 
[2024-12-06 17:44:26.636798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.636813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.636829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.636845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.636861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.636877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.636893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.636909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9960 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.636989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.636999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.637004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.637014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.790 [2024-12-06 17:44:26.637020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.637030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.637036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.637046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.637051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.637062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.637067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.637077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.637082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.637093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.637098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.637108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.637114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.637124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.637130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.637140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.790 [2024-12-06 17:44:26.637145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:49.790 [2024-12-06 17:44:26.637156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.791 [2024-12-06 17:44:26.637162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:49.791 [2024-12-06 17:44:26.637428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.791 [2024-12-06 17:44:26.637434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.791 [2024-12-06 17:44:26.637449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.791 [2024-12-06 17:44:26.637465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.791 [2024-12-06 17:44:26.637481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.791 [2024-12-06 17:44:26.637496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.791 [2024-12-06 17:44:26.637511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.791 [2024-12-06 17:44:26.637527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.791 [2024-12-06 17:44:26.637909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.791 [2024-12-06 17:44:26.637927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.637984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.637989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.638000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.638005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.638015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.638020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.638031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.638036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.638046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.638051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.638062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.638067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.638078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.638083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.638093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.638099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.638110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.638115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.638125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.791 [2024-12-06 17:44:26.638131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:49.791 [2024-12-06 17:44:26.638141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.792 [2024-12-06 17:44:26.638255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9352 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.638635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.638650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.792 [2024-12-06 17:44:26.649813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.792 [2024-12-06 17:44:26.649853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.649861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.649873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.649878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.649892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.649898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.649908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.649913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:28:49.793 [2024-12-06 17:44:26.649924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.649929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.649940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.649945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.649955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.649961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.649971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.649976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.649987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.649993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.793 [2024-12-06 17:44:26.650881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.793 [2024-12-06 17:44:26.650896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.793 [2024-12-06 17:44:26.650954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.793 [2024-12-06 17:44:26.650959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.650969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.650976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.650987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.650992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.794 [2024-12-06 17:44:26.651023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.794 [2024-12-06 17:44:26.651038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 
lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.794 [2024-12-06 17:44:26.651053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.794 [2024-12-06 17:44:26.651069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.794 [2024-12-06 17:44:26.651085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.794 [2024-12-06 17:44:26.651100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.794 [2024-12-06 17:44:26.651115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.794 [2024-12-06 17:44:26.651256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:28:49.794 [2024-12-06 17:44:26.651362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.794 [2024-12-06 17:44:26.651507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.794 [2024-12-06 17:44:26.651517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.794 [2024-12-06 17:44:26.651522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.651532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.795 [2024-12-06 17:44:26.651537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.651548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.795 [2024-12-06 17:44:26.651554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.651565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.795 [2024-12-06 17:44:26.651570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.651580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.795 [2024-12-06 17:44:26.651585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.651596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.795 [2024-12-06 17:44:26.651601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.795 [2024-12-06 17:44:26.652027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.795 [2024-12-06 17:44:26.652044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.795 [2024-12-06 17:44:26.652059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.795 [2024-12-06 17:44:26.652234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.795 [2024-12-06 17:44:26.652499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:49.795 [2024-12-06 17:44:26.652509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:28:49.796 [2024-12-06 17:44:26.652918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.652986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.652996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.653001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.653012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.653017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.653027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.653032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.653043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.653048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.653058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.653063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.653073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.653078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.653089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.653094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.653104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.653109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.653119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.653126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.653140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.660600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.660659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.660681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.660702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.660723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.660743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.660764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.660785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.660806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.796 [2024-12-06 17:44:26.660826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.796 [2024-12-06 17:44:26.660847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:49.796 [2024-12-06 17:44:26.660861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.796 [2024-12-06 17:44:26.660868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.660882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.660888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.660906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.660913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.660927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.660935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.660949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 
[2024-12-06 17:44:26.660956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.660970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.660977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.660991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.660997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9624 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.661165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.661186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.661206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.661227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.661248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.661269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.661290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.661310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661365] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.661386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.661393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.662002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.662018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.662034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.662042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.662056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.797 [2024-12-06 17:44:26.662063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.662077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.662083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.662097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.662104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.662118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.797 [2024-12-06 17:44:26.662125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:49.797 [2024-12-06 17:44:26.662139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662181] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.798 [2024-12-06 17:44:26.662250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 
dnr:0 00:28:49.798 [2024-12-06 17:44:26.662391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.798 [2024-12-06 17:44:26.662607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.798 [2024-12-06 17:44:26.662627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.798 [2024-12-06 17:44:26.662654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.798 [2024-12-06 17:44:26.662675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.798 [2024-12-06 17:44:26.662695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.798 [2024-12-06 17:44:26.662716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.798 [2024-12-06 17:44:26.662737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.798 [2024-12-06 17:44:26.662757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.798 [2024-12-06 17:44:26.662778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:49.798 [2024-12-06 17:44:26.662918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.798 [2024-12-06 17:44:26.662925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:49.799 [2024-12-06 17:44:26.662938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.799 [2024-12-06 17:44:26.662945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:49.799 [2024-12-06 17:44:26.662959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.799 [2024-12-06 17:44:26.662966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:49.799 [2024-12-06 17:44:26.662980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.799 [2024-12-06 17:44:26.662987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:49.799 [2024-12-06 17:44:26.663001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.799 [2024-12-06 17:44:26.663008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
[... several hundred near-identical nvme_io_qpair_print_command / spdk_nvme_print_completion notices trimmed: between 17:44:26.663 and 17:44:26.670, every outstanding READ (lba 9112-9872, len:8) and WRITE (lba 9880-10128, len:8) on sqid:1 nsid:1 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), p:0 m:0 dnr:0 ...]
00:28:49.804 [2024-12-06 17:44:26.670582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9624 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.670592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.670611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.670622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.670644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.670654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.670673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.670683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.670702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.670711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.670730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.670740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.670758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.675624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.675684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.675696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.675716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.804 [2024-12-06 17:44:26.675725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.675744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.804 [2024-12-06 17:44:26.675754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676502] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.804 [2024-12-06 17:44:26.676519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.804 [2024-12-06 17:44:26.676549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.804 [2024-12-06 17:44:26.676577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.804 [2024-12-06 17:44:26.676604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.804 [2024-12-06 17:44:26.676636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.676672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.676700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.676727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.676754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.676782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676800] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.676809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.676836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.676863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.804 [2024-12-06 17:44:26.676890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.676917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:49.804 [2024-12-06 17:44:26.676936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.804 [2024-12-06 17:44:26.676945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.676965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.676974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.676992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 
dnr:0 00:28:49.805 [2024-12-06 17:44:26.677073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.805 [2024-12-06 17:44:26.677357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.805 [2024-12-06 17:44:26.677385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.805 [2024-12-06 17:44:26.677412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.805 [2024-12-06 17:44:26.677439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.805 [2024-12-06 17:44:26.677466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.805 [2024-12-06 17:44:26.677493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.805 [2024-12-06 17:44:26.677520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.805 [2024-12-06 17:44:26.677547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.805 [2024-12-06 17:44:26.677574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.805 [2024-12-06 17:44:26.677878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.677979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.805 [2024-12-06 17:44:26.677988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:49.805 [2024-12-06 17:44:26.678006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.678331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.678341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:28:49.806 [2024-12-06 17:44:26.679440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.806 [2024-12-06 17:44:26.679698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:49.806 [2024-12-06 17:44:26.679716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.679745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.679772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.679799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.679826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.679853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.679880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.679907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.679934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.679961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.679988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.679997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 
[2024-12-06 17:44:26.680242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9624 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.807 [2024-12-06 17:44:26.680707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.680725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.680736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.681426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.807 [2024-12-06 17:44:26.681439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:49.807 [2024-12-06 17:44:26.681460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
00:28:49.808 [2024-12-06 17:44:26.681469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:49.808 [2024-12-06 17:44:26.681488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:49.808 [2024-12-06 17:44:26.681497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:28:49.808 [2024-12-06 17:44:26.681597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.808 [2024-12-06 17:44:26.681606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
[log condensed: several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted — READ and WRITE commands (sqid:1, nsid:1, lba 9112-10128, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, dnr:0, elapsed 00:28:49.808-00:28:49.813, timestamps 2024-12-06 17:44:26.681469-17:44:26.690359]
00:28:49.813 [2024-12-06 17:44:26.690375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.813 [2024-12-06 17:44:26.690889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:49.813 [2024-12-06 17:44:26.690903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.690912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.690926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.690933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.690948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.690955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.690969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.690977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.690991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.690999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 
[2024-12-06 17:44:26.691020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.691042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.691064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.691085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.691107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.691129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.691150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.691174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.691196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.691217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9624 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.691239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.691261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.691282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.691304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.691325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.691347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.691369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.814 12382.92 IOPS, 48.37 MiB/s [2024-12-06T16:44:41.880Z] [2024-12-06 17:44:26.691945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.691957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.691980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.691994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.692003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692020] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.692027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.692049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.692071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.692093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.692114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.692136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.692158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.692180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.692202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.692224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 
dnr:0 00:28:49.814 [2024-12-06 17:44:26.692238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.692246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.692267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.814 [2024-12-06 17:44:26.692293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:49.814 [2024-12-06 17:44:26.692307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.814 [2024-12-06 17:44:26.692314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.815 [2024-12-06 17:44:26.692691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.815 [2024-12-06 17:44:26.692712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.815 [2024-12-06 17:44:26.692734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.815 [2024-12-06 17:44:26.692756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.815 [2024-12-06 17:44:26.692777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.815 [2024-12-06 17:44:26.692799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.815 [2024-12-06 17:44:26.692822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.815 [2024-12-06 17:44:26.692846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.815 [2024-12-06 17:44:26.692869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.815 [2024-12-06 17:44:26.692892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.692981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.692996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.693003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.693018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.693025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.693040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.693047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.693062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.693069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.693083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.815 [2024-12-06 17:44:26.693091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:49.815 [2024-12-06 17:44:26.693106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.693416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.693423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:28:49.816 [2024-12-06 17:44:26.694140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:49.816 [2024-12-06 17:44:26.694339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.816 [2024-12-06 17:44:26.694346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
[repeated *NOTICE* pairs from nvme_qpair.c condensed: 243:nvme_io_qpair_print_command reports READ commands (sqid:1 nsid:1 lba:9112-9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1 nsid:1 lba:9880-10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) across varying cids; each is followed by 474:spdk_nvme_print_completion with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0x0052-0x007f and wrapping to 0x0000; log timestamps 00:28:49.816-00:28:49.822, wall clock [2024-12-06 17:44:26.694360]-[2024-12-06 17:44:26.699467]]
00:28:49.822 [2024-12-06 17:44:26.699467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE
(03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.822 [2024-12-06 17:44:26.699690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.822 [2024-12-06 17:44:26.699710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.822 [2024-12-06 17:44:26.699729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.822 [2024-12-06 17:44:26.699748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.822 [2024-12-06 17:44:26.699767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.822 [2024-12-06 17:44:26.699786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.822 [2024-12-06 17:44:26.699806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.822 [2024-12-06 17:44:26.699866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.822 [2024-12-06 17:44:26.699889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.822 [2024-12-06 17:44:26.699910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.699986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.699992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.700007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.700012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.700027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.700033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.700048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.700053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.700068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.700073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.700089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.700094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:49.822 [2024-12-06 17:44:26.700109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.822 [2024-12-06 17:44:26.700114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:26.700377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:26.700384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.823 11430.38 IOPS, 44.65 MiB/s [2024-12-06T16:44:41.889Z] 10613.93 IOPS, 41.46 MiB/s [2024-12-06T16:44:41.889Z] 9906.33 IOPS, 38.70 MiB/s [2024-12-06T16:44:41.889Z] 10095.12 IOPS, 39.43 MiB/s [2024-12-06T16:44:41.889Z] 10254.76 IOPS, 40.06 MiB/s [2024-12-06T16:44:41.889Z] 10599.94 IOPS, 41.41 MiB/s [2024-12-06T16:44:41.889Z] 10938.32 IOPS, 42.73 MiB/s [2024-12-06T16:44:41.889Z] 11162.95 IOPS, 43.61 MiB/s [2024-12-06T16:44:41.889Z] 11240.57 IOPS, 43.91 MiB/s [2024-12-06T16:44:41.889Z] 11312.32 IOPS, 44.19 MiB/s [2024-12-06T16:44:41.889Z] 11516.26 IOPS, 44.99 MiB/s [2024-12-06T16:44:41.889Z] 11741.58 IOPS, 45.87 MiB/s [2024-12-06T16:44:41.889Z] [2024-12-06 17:44:39.365593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:39.365631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:39.367128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:39.367147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:39.367259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:39.367280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.823 [2024-12-06 17:44:39.367296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:28:49.823 [2024-12-06 17:44:39.367453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:49.823 [2024-12-06 17:44:39.367484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.823 [2024-12-06 17:44:39.367489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:49.823 11889.60 IOPS, 46.44 MiB/s [2024-12-06T16:44:41.889Z] 11924.12 IOPS, 46.58 MiB/s [2024-12-06T16:44:41.889Z] Received shutdown signal, test time was about 26.821656 seconds 00:28:49.823 00:28:49.823 Latency(us) 00:28:49.823 [2024-12-06T16:44:41.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.823 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:49.823 Verification LBA range: start 0x0 length 0x4000 00:28:49.823 Nvme0n1 : 26.82 11947.63 46.67 0.00 0.00 10692.96 351.57 3075822.93 00:28:49.823 [2024-12-06T16:44:41.889Z] =================================================================================================================== 00:28:49.823 [2024-12-06T16:44:41.889Z] Total : 11947.63 46.67 0.00 0.00 10692.96 351.57 3075822.93 00:28:49.823 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.083 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:50.084 rmmod nvme_tcp 00:28:50.084 rmmod nvme_fabrics 00:28:50.084 rmmod nvme_keyring 00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:28:50.084 
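The (03/02) completions above look like path-state churn rather than media errors: while the multipath status test moves the active path, queued verify I/O briefly completes with ASYMMETRIC ACCESS INACCESSIBLE and is retried, which matches the IOPS dip and recovery in the readings. A minimal shell sketch for tallying such completions from a saved console log; the file name autotest.log is hypothetical, not something this run produces:

#!/usr/bin/env bash
# Hedged sketch: summarize ANA-inaccessible completions from a saved log.
# "autotest.log" is an assumed file name.
log=autotest.log
# total completions that carried ANA state 03/02
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"
# retried commands on submission queue 1, split by opcode
grep -o '\(READ\|WRITE\) sqid:1' "$log" | sort | uniq -c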
00:28:49.823 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:50.084 rmmod nvme_tcp
00:28:50.084 rmmod nvme_fabrics
00:28:50.084 rmmod nvme_keyring
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1712393 ']'
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1712393
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1712393 ']'
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1712393
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1712393
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1712393'
00:28:50.084 killing process with pid 1712393
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1712393
00:28:50.084 17:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1712393
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:50.084 17:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:52.625
00:28:52.625 real 0m41.042s
00:28:52.625 user 1m46.609s
00:28:52.625 sys 0m11.339s
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
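Condensed, the teardown just traced reduces to a handful of commands. A standalone sketch under stated assumptions: SPDK_DIR stands in for this job's /var/jenkins/... checkout, NVMF_PID for the target pid, and the final netns delete is presumed to be what _remove_spdk_ns does:

#!/usr/bin/env bash
# Hedged replay of the nvmftestfini sequence above; SPDK_DIR and NVMF_PID are
# placeholders, and the netns delete is an assumption about _remove_spdk_ns.
"$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$NVMF_PID" || true                              # stop the nvmf_tgt reactor
modprobe -v -r nvme-tcp || true                       # unload host transport modules
modprobe -v -r nvme-fabrics || true
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the tagged ACCEPT rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # remove the target namespace
ip -4 addr flush cvl_0_1                              # clear the initiator address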
00:28:52.625 ************************************
00:28:52.625 END TEST nvmf_host_multipath_status
00:28:52.625 ************************************
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:52.625 ************************************
00:28:52.625 START TEST nvmf_discovery_remove_ifc
00:28:52.625 ************************************
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:28:52.625 * Looking for test storage...
00:28:52.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:52.625 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:52.625 [elided: scripts/common.sh@333-@368 cmp_versions trace: both versions are split on IFS=.-:, decimal() validates each field, and 1 < 2 decides the comparison]
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:52.626 [elided: four near-identical LCOV_OPTS/LCOV export-and-assign records, each carrying '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1']
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
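The cmp_versions walk above decides that lcov 1.15 predates 2.x, so the legacy --rc options are kept. A reconstruction of that dotted-version comparison follows; it mirrors the traced algorithm but is not the verbatim SPDK helper:

#!/usr/bin/env bash
# Reconstruction of the cmp_versions logic traced above: split both versions
# on the characters . - :, then compare numerically field by field.
version_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local v max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1  # first differing field decides
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
}
version_lt 1.15 2 && echo 'lcov predates 2.x: use legacy --rc lcov_* options'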
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:52.626 [elided: paths/export.sh@2 through @6: five records that prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, export it, and echo the result; the PATH value is those three prefixes repeated eight times ahead of the stock system paths]
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:52.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable
00:28:52.626 17:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:00.767 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:00.767 [elided: nvmf/common.sh@315 through @344: declarations of the empty pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays, then the device-id tables: e810 += 0x1592, 0x159b; x722 += 0x37d2; mlx += 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013]
00:29:00.767 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:00.767 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:00.767 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:00.767 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:00.767 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:00.767 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:00.767 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:29:00.767 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:29:00.767 [elided: driver checks for 0000:4b:00.0: ice is neither unknown nor unbound, 0x159b matches no Mellanox id, and the transport is not rdma]
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:29:00.768 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:29:00.768 [elided: the same driver checks for 0000:4b:00.1, then the (( 0 > 0 )) and e810/rdma guards]
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:29:00.768 Found net devices under 0000:4b:00.0: cvl_0_0
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:00.768 [elided: the same sysfs walk for 0000:4b:00.1]
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:29:00.768 Found net devices under 0000:4b:00.1: cvl_0_1
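The discovery loop above is a sysfs walk: for each matching PCI function it lists /sys/bus/pci/devices/<addr>/net to find the bound kernel netdev. A minimal equivalent, using the two addresses this run detected:

#!/usr/bin/env bash
# Minimal sysfs walk equivalent to the trace above; the PCI addresses are the
# two e810 ports found in this run.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] || continue        # function with no bound net device
        echo "Found net devices under $pci: ${dev##*/}"
    done
done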
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
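Those commands build the test's data path: one physical port moves into its own network namespace so initiator and target traffic crosses the link instead of short-circuiting through the local stack. The same plumbing as a standalone script, taken directly from the trace above:

#!/usr/bin/env bash
# Replay of the namespace plumbing traced above (same interface names and
# addresses); run as root on a machine where cvl_0_0/cvl_0_1 exist.
set -e
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # target port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port, tagged so the teardown's iptables-save|grep -v can strip it
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'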
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:00.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:00.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms
00:29:00.768
00:29:00.768 --- 10.0.0.2 ping statistics ---
00:29:00.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:00.768 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:00.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:00.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms
00:29:00.768
00:29:00.768 --- 10.0.0.1 ping statistics ---
00:29:00.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:00.768 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1715459
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1715459
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1715459 ']'
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
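The two pings above verify each direction of the freshly built link before any NVMe traffic is attempted, and the NVMF_APP assignment prepends the namespace wrapper so every later target invocation runs inside cvl_0_0_ns_spdk. In outline:

    # Root ns -> namespace, then namespace -> root ns; one packet each is enough.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # All subsequent nvmf_tgt launches are namespace-wrapped:
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")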
00:29:00.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.768 17:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:00.768 [2024-12-06 17:44:51.865500] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:29:00.768 [2024-12-06 17:44:51.865570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.769 [2024-12-06 17:44:51.962669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.769 [2024-12-06 17:44:52.012038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.769 [2024-12-06 17:44:52.012089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.769 [2024-12-06 17:44:52.012097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.769 [2024-12-06 17:44:52.012104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.769 [2024-12-06 17:44:52.012111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.769 [2024-12-06 17:44:52.012892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:00.769 [2024-12-06 17:44:52.727711] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.769 [2024-12-06 17:44:52.735932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:00.769 null0 00:29:00.769 [2024-12-06 17:44:52.767919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1715492 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1715492 /tmp/host.sock 00:29:00.769 17:44:52 
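waitforlisten appears twice in this test: once for the target (pid 1715459 on /var/tmp/spdk.sock) and once for the host-side app started just above (pid 1715492 on /tmp/host.sock). The real helper lives in test/common/autotest_common.sh; a simplified sketch of the polling it performs, assuming the pid/socket pairs from this run:

    # Simplified: poll until the app's RPC socket exists, bail out if the app dies.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process exited prematurely
            [[ -S $rpc_addr ]] && return 0           # RPC socket is up
            sleep 0.5
        done
        return 1
    }

    waitforlisten_sketch 1715459 /var/tmp/spdk.sock   # nvmf_tgt inside the namespace
    waitforlisten_sketch 1715492 /tmp/host.sock       # host-side bdev_nvme app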
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1715492 ']' 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:00.769 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.769 17:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:01.029 [2024-12-06 17:44:52.855180] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:29:01.029 [2024-12-06 17:44:52.855244] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715492 ] 00:29:01.029 [2024-12-06 17:44:52.947979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.029 [2024-12-06 17:44:53.000464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.601 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.601 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:01.601 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.601 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:01.601 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.601 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:01.861 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.861 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:01.861 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.861 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:01.861 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.861 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:01.861 17:44:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.861 17:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:02.800 [2024-12-06 17:44:54.815852] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:02.800 [2024-12-06 17:44:54.815873] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:02.800 [2024-12-06 17:44:54.815887] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:03.060 [2024-12-06 17:44:54.902164] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:03.060 [2024-12-06 17:44:55.077321] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:03.060 [2024-12-06 17:44:55.078349] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x98fed0:1 started. 00:29:03.060 [2024-12-06 17:44:55.079923] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:03.060 [2024-12-06 17:44:55.079969] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:03.060 [2024-12-06 17:44:55.079992] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:03.060 [2024-12-06 17:44:55.080006] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:03.060 [2024-12-06 17:44:55.080026] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:03.060 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.060 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:03.060 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:03.060 [2024-12-06 17:44:55.085875] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x98fed0 was disconnected and freed. delete nvme_qpair. 
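Everything the host app does from here on goes over its private RPC socket: bdev_nvme options are set, framework_start_init releases the app from --wait-for-rpc, and bdev_nvme_start_discovery connects to the discovery service on 10.0.0.2:8009, which attaches cnode0 and produces the nvme0n1 bdev seen in the attach logs above. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so the equivalent direct sequence is roughly:

    # Socket path, address, and flags taken verbatim from the trace.
    RPC="scripts/rpc.py -s /tmp/host.sock"

    $RPC bdev_nvme_set_options -e 1      # flag as given in the trace (step @65)
    $RPC framework_start_init            # leave --wait-for-rpc, start the reactors
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach                # block until the subsystem is attached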
00:29:03.060 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:03.060 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:03.060 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.060 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:03.060 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:03.060 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:03.060 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:03.319 17:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:04.254 17:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:04.254 17:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:04.254 17:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:04.254 17:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.254 17:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:04.255 17:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:04.255 17:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:04.514 17:44:56 
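The rpc_cmd/jq/sort/xargs block that repeats from here to the end of the test is its polling primitive: get_bdev_list flattens the bdev inventory into one sorted, space-separated string, and wait_for_bdev spins on a one-second sleep until that string matches the expected value. Reconstructed from the xtrace (the originals are in host/discovery_remove_ifc.sh around lines 29-34):

    get_bdev_list() {
        # rpc_cmd forwards to scripts/rpc.py against the host app's socket.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        while [[ $(get_bdev_list) != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1   # step @72: bdev present after discovery
    wait_for_bdev ''        # step @79: bdev gone once the interface is removed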
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.514 17:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:04.514 17:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:05.453 17:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:05.453 17:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:05.453 17:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:05.453 17:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.453 17:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:05.453 17:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:05.453 17:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:05.453 17:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.453 17:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:05.453 17:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:06.390 17:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:06.391 17:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:06.391 17:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:06.391 17:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.391 17:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:06.391 17:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:06.391 17:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:06.391 17:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.649 17:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:06.649 17:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:07.587 17:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:07.587 17:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:07.587 17:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:07.587 17:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.587 17:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:07.587 17:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:07.587 17:44:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:07.587 17:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.587 17:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:07.587 17:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:08.525 [2024-12-06 17:45:00.520530] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:08.525 [2024-12-06 17:45:00.520571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.525 [2024-12-06 17:45:00.520581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.525 [2024-12-06 17:45:00.520589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.525 [2024-12-06 17:45:00.520595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.525 [2024-12-06 17:45:00.520601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.525 [2024-12-06 17:45:00.520606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.525 [2024-12-06 17:45:00.520616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.525 [2024-12-06 17:45:00.520622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.525 [2024-12-06 17:45:00.520628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.525 [2024-12-06 17:45:00.520633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.525 [2024-12-06 17:45:00.520641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c6d0 is same with the state(6) to be set 00:29:08.525 17:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:08.525 17:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:08.525 17:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:08.525 17:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.525 17:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:08.525 17:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:08.525 17:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:08.525 [2024-12-06 17:45:00.530551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c6d0 (9): Bad 
file descriptor 00:29:08.525 [2024-12-06 17:45:00.540584] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:08.525 [2024-12-06 17:45:00.540594] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:08.525 [2024-12-06 17:45:00.540599] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:08.525 [2024-12-06 17:45:00.540603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:08.525 [2024-12-06 17:45:00.540620] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:08.525 17:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.525 17:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:08.525 17:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:09.905 17:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:09.905 17:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:09.905 17:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:09.905 17:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.905 17:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:09.905 17:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:09.905 17:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:09.905 [2024-12-06 17:45:01.600653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:09.905 [2024-12-06 17:45:01.600711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c6d0 with addr=10.0.0.2, port=4420 00:29:09.905 [2024-12-06 17:45:01.600734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c6d0 is same with the state(6) to be set 00:29:09.905 [2024-12-06 17:45:01.600772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c6d0 (9): Bad file descriptor 00:29:09.905 [2024-12-06 17:45:01.601511] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:29:09.905 [2024-12-06 17:45:01.601559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:09.905 [2024-12-06 17:45:01.601575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:09.905 [2024-12-06 17:45:01.601591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:09.905 [2024-12-06 17:45:01.601604] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:09.905 [2024-12-06 17:45:01.601616] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:29:09.905 [2024-12-06 17:45:01.601625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:09.905 [2024-12-06 17:45:01.601662] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:09.905 [2024-12-06 17:45:01.601672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:09.905 17:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.905 17:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:09.905 17:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:10.843 [2024-12-06 17:45:02.604060] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:10.843 [2024-12-06 17:45:02.604076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:10.843 [2024-12-06 17:45:02.604086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:10.843 [2024-12-06 17:45:02.604092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:10.843 [2024-12-06 17:45:02.604098] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:29:10.843 [2024-12-06 17:45:02.604103] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:10.843 [2024-12-06 17:45:02.604107] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:10.843 [2024-12-06 17:45:02.604110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
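The reconnect churn above is the intended consequence of the aggressive timeouts passed to bdev_nvme_start_discovery earlier: with 10.0.0.2 unreachable, every reconnect attempt fails, and once the controller-loss timeout expires the controller is deleted, which removes nvme0n1 and lets wait_for_bdev '' return. The knobs, as given on the traced command line (a loose gloss of the option names, not an authoritative spec):

    --reconnect-delay-sec 1        # wait 1 s between reconnect attempts
    --fast-io-fail-timeout-sec 1   # start failing queued I/O after 1 s disconnected
    --ctrlr-loss-timeout-sec 2     # give up and delete the controller after 2 s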
00:29:10.843 [2024-12-06 17:45:02.604126] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:10.843 [2024-12-06 17:45:02.604142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.843 [2024-12-06 17:45:02.604149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.843 [2024-12-06 17:45:02.604156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.843 [2024-12-06 17:45:02.604162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.843 [2024-12-06 17:45:02.604167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.843 [2024-12-06 17:45:02.604173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.843 [2024-12-06 17:45:02.604178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.843 [2024-12-06 17:45:02.604186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.843 [2024-12-06 17:45:02.604192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.843 [2024-12-06 17:45:02.604198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.843 [2024-12-06 17:45:02.604202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:29:10.843 [2024-12-06 17:45:02.604860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95bdf0 (9): Bad file descriptor 00:29:10.843 [2024-12-06 17:45:02.605869] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:10.843 [2024-12-06 17:45:02.605877] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:10.843 17:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:11.780 17:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:11.780 17:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:11.780 17:45:03 
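Steps @75/@76 broke the path and steps @82/@83 above restore it; the test expects discovery to re-attach the subsystem as a brand-new controller, so the next wait is for nvme1n1 rather than nvme0n1. The whole fault cycle in outline:

    # Inject the fault: take the target address and link away inside the namespace.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... bdev list drains to '' as the controller times out ...

    # Undo it: discovery should re-attach and surface a fresh bdev.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # ... wait_for_bdev nvme1n1 ...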
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:11.780 17:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.780 17:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:11.780 17:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:11.780 17:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:12.039 17:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.040 17:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:12.040 17:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:12.608 [2024-12-06 17:45:04.662823] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:12.608 [2024-12-06 17:45:04.662839] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:12.608 [2024-12-06 17:45:04.662849] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:12.867 [2024-12-06 17:45:04.749086] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:12.867 [2024-12-06 17:45:04.851882] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:29:12.867 [2024-12-06 17:45:04.852569] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x999640:1 started. 00:29:12.867 [2024-12-06 17:45:04.853473] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:12.867 [2024-12-06 17:45:04.853501] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:12.867 [2024-12-06 17:45:04.853515] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:12.867 [2024-12-06 17:45:04.853526] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:12.867 [2024-12-06 17:45:04.853532] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:12.867 [2024-12-06 17:45:04.859725] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x999640 was disconnected and freed. delete nvme_qpair. 
00:29:12.867 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:12.867 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:12.867 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:12.867 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.867 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:12.867 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:12.867 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:12.867 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.137 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:13.137 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:13.137 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1715492 00:29:13.137 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1715492 ']' 00:29:13.137 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1715492 00:29:13.137 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:13.137 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.137 17:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715492 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715492' 00:29:13.137 killing process with pid 1715492 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1715492 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1715492 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.137 rmmod nvme_tcp 00:29:13.137 rmmod nvme_fabrics 00:29:13.137 rmmod nvme_keyring 00:29:13.137 17:45:05 
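killprocess, traced above for the host pid, is autotest_common.sh's guarded kill: check that a pid was given and is alive, look up its command name, refuse to kill a bare sudo, then kill and reap. A sketch following the traced branches (the sudo handling of the real helper is omitted):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1        # must still be running
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1        # never kill a bare sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap; works because it is our child
    }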
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1715459 ']' 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1715459 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1715459 ']' 00:29:13.137 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1715459 00:29:13.138 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:13.138 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.138 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715459 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715459' 00:29:13.398 killing process with pid 1715459 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1715459 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1715459 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.398 17:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:15.939 00:29:15.939 real 0m23.208s 00:29:15.939 user 0m27.350s 00:29:15.939 sys 0m7.035s 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
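That closes the discovery_remove_ifc run (23 s wall time): nvmftestfini killed both daemons, unloaded nvme-tcp/nvme-fabrics/nvme-keyring, stripped exactly the firewall rules tagged at setup, and tore down the namespace. The traced iptr helper plus the suppressed _remove_spdk_ns step amount to roughly this (the netns delete is inferred, since _remove_spdk_ns runs with its output redirected away):

    # Drop every rule carrying the SPDK_NVMF comment, keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Presumed body of _remove_spdk_ns, then the final address flush from the trace.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1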
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:15.939 ************************************ 00:29:15.939 END TEST nvmf_discovery_remove_ifc 00:29:15.939 ************************************ 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.939 ************************************ 00:29:15.939 START TEST nvmf_identify_kernel_target 00:29:15.939 ************************************ 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:15.939 * Looking for test storage... 00:29:15.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:15.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.939 --rc genhtml_branch_coverage=1 00:29:15.939 --rc genhtml_function_coverage=1 00:29:15.939 --rc genhtml_legend=1 00:29:15.939 --rc geninfo_all_blocks=1 00:29:15.939 --rc geninfo_unexecuted_blocks=1 00:29:15.939 00:29:15.939 ' 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:15.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.939 --rc genhtml_branch_coverage=1 00:29:15.939 --rc genhtml_function_coverage=1 00:29:15.939 --rc genhtml_legend=1 00:29:15.939 --rc geninfo_all_blocks=1 00:29:15.939 --rc geninfo_unexecuted_blocks=1 00:29:15.939 00:29:15.939 ' 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:15.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.939 --rc genhtml_branch_coverage=1 00:29:15.939 --rc genhtml_function_coverage=1 00:29:15.939 --rc genhtml_legend=1 00:29:15.939 --rc geninfo_all_blocks=1 00:29:15.939 --rc geninfo_unexecuted_blocks=1 00:29:15.939 00:29:15.939 ' 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:15.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.939 --rc genhtml_branch_coverage=1 00:29:15.939 --rc genhtml_function_coverage=1 00:29:15.939 --rc genhtml_legend=1 00:29:15.939 --rc geninfo_all_blocks=1 00:29:15.939 --rc geninfo_unexecuted_blocks=1 00:29:15.939 00:29:15.939 ' 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.939 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:15.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
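The "integer expression expected" message above is a scripting wart rather than a test failure: test/nvmf/common.sh line 33 runs a numeric test of the form '[' '' -eq 1 ']' against a variable that is unset in this configuration, and test(1)'s -eq requires both operands to be integers. The command exits non-zero and the run continues, but the same noise repeats every time the file is sourced. A defensive sketch (the variable name "flag" is illustrative):

    # Sketch of the failure mode and a fix.
    flag=""
    [ "$flag" -eq 1 ]        # -> "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ]   # empty/unset defaults to 0; comparison stays numeric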
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:29:15.940 17:45:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:24.105 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:29:24.106 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:29:24.106 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:29:24.106 Found net devices under 0000:4b:00.0: cvl_0_0
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:29:24.106 Found net devices under 0000:4b:00.1: cvl_0_1
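The discovery pass above is plain sysfs walking: gather_supported_nvmf_pci_devs keeps a whitelist of Intel E810/X722 and Mellanox vendor:device IDs, filters the host's PCI functions against it (pci_bus_cache maps "vendor:device" to bus addresses), and then lists each matching function's bound network interfaces from /sys/bus/pci/devices/<bdf>/net. A standalone sketch of the same walk, with the PCI addresses taken from the output above and everything else generic:

    # Sketch: list the kernel netdevs bound to the two E810 ports found above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue   # glob unmatched: no netdev bound
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done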
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:24.106 17:45:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:24.106 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:24.106 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:24.106 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:24.106 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:24.106 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:24.106 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:24.106 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:24.106 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:24.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:24.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms
00:29:24.106
00:29:24.106 --- 10.0.0.2 ping statistics ---
00:29:24.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:24.106 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms
00:29:24.106 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:24.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:24.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms
00:29:24.106
00:29:24.106 --- 10.0.0.1 ping statistics ---
00:29:24.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:24.106 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:29:24.106 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
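nvmf_tcp_init above builds the whole two-host topology on a single machine: NIC port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule tagged SPDK_NVMF opens TCP/4420 so the teardown can later grep out exactly its own rule. The pings in both directions confirm the link before any NVMe traffic starts. A sketch of the same topology using a veth pair instead of the two physical E810 ports (the veth substitution is mine; names and addresses mirror the trace, and root privileges are assumed):

    # Reproduce the test network without the E810 NIC, using a veth pair.
    ip netns add cvl_0_0_ns_spdk
    ip link add cvl_0_1 type veth peer name cvl_0_0
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target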
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:29:24.107 17:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:29:26.651 Waiting for block devices as requested
00:29:26.651 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:29:26.911 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:29:26.911 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:29:26.911 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:29:27.170 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:29:27.170 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:29:27.170 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:29:27.430 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:29:27.430 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:29:27.691 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:29:27.691 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:29:27.691 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:29:27.951 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:29:27.951 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:29:27.951 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:29:28.214 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:29:28.214 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
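configure_kernel_target builds the kernel-side NVMe-oF target entirely through the nvmet configfs tree, and the mkdir/echo/ln sequence just below is that tree being populated. Note that setup.sh reset returns the idle NVMe disk from vfio-pci to the kernel nvme driver, and the "No valid GPT data, bailing" probe below is how the script confirms /dev/nvme0n1 is not in use before exporting it. xtrace prints each echo but hides its redirection target, so the attribute files in this condensed sketch are inferred from the standard nvmet configfs layout and from the echoed values (the model string reappears verbatim as "Model Number" in the identify output further down):

    # Sketch of the configfs sequence, with inferred redirection targets.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # inferred target
    echo 1 > "$subsys/attr_allow_any_host"                         # inferred target
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # export it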
00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:28.474 No valid GPT data, bailing 00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:28.474 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:29:28.734 00:29:28.734 Discovery Log Number of Records 2, Generation counter 2 00:29:28.734 =====Discovery Log Entry 0====== 00:29:28.734 trtype: tcp 00:29:28.734 adrfam: ipv4 00:29:28.734 subtype: current discovery subsystem 00:29:28.734 treq: not specified, sq flow control disable supported 00:29:28.734 portid: 1 00:29:28.734 trsvcid: 4420 00:29:28.734 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:28.734 traddr: 10.0.0.1 00:29:28.734 eflags: none 00:29:28.734 sectype: none 00:29:28.734 =====Discovery Log Entry 1====== 00:29:28.734 trtype: tcp 00:29:28.734 adrfam: ipv4 00:29:28.734 subtype: nvme subsystem 00:29:28.734 treq: not specified, sq flow control disable 
supported 00:29:28.734 portid: 1 00:29:28.734 trsvcid: 4420 00:29:28.734 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:28.734 traddr: 10.0.0.1 00:29:28.734 eflags: none 00:29:28.734 sectype: none 00:29:28.734 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:28.734 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:28.734 ===================================================== 00:29:28.734 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:28.734 ===================================================== 00:29:28.734 Controller Capabilities/Features 00:29:28.734 ================================ 00:29:28.734 Vendor ID: 0000 00:29:28.734 Subsystem Vendor ID: 0000 00:29:28.734 Serial Number: ff46537ae75cbaa1c4fd 00:29:28.734 Model Number: Linux 00:29:28.734 Firmware Version: 6.8.9-20 00:29:28.734 Recommended Arb Burst: 0 00:29:28.734 IEEE OUI Identifier: 00 00 00 00:29:28.734 Multi-path I/O 00:29:28.734 May have multiple subsystem ports: No 00:29:28.734 May have multiple controllers: No 00:29:28.734 Associated with SR-IOV VF: No 00:29:28.734 Max Data Transfer Size: Unlimited 00:29:28.734 Max Number of Namespaces: 0 00:29:28.734 Max Number of I/O Queues: 1024 00:29:28.734 NVMe Specification Version (VS): 1.3 00:29:28.734 NVMe Specification Version (Identify): 1.3 00:29:28.734 Maximum Queue Entries: 1024 00:29:28.734 Contiguous Queues Required: No 00:29:28.734 Arbitration Mechanisms Supported 00:29:28.734 Weighted Round Robin: Not Supported 00:29:28.734 Vendor Specific: Not Supported 00:29:28.734 Reset Timeout: 7500 ms 00:29:28.734 Doorbell Stride: 4 bytes 00:29:28.734 NVM Subsystem Reset: Not Supported 00:29:28.734 Command Sets Supported 00:29:28.734 NVM Command Set: Supported 00:29:28.734 Boot Partition: Not Supported 00:29:28.734 Memory Page Size Minimum: 4096 bytes 00:29:28.734 Memory Page Size Maximum: 4096 bytes 00:29:28.734 Persistent Memory Region: Not Supported 00:29:28.734 Optional Asynchronous Events Supported 00:29:28.734 Namespace Attribute Notices: Not Supported 00:29:28.734 Firmware Activation Notices: Not Supported 00:29:28.734 ANA Change Notices: Not Supported 00:29:28.734 PLE Aggregate Log Change Notices: Not Supported 00:29:28.734 LBA Status Info Alert Notices: Not Supported 00:29:28.734 EGE Aggregate Log Change Notices: Not Supported 00:29:28.734 Normal NVM Subsystem Shutdown event: Not Supported 00:29:28.734 Zone Descriptor Change Notices: Not Supported 00:29:28.734 Discovery Log Change Notices: Supported 00:29:28.734 Controller Attributes 00:29:28.734 128-bit Host Identifier: Not Supported 00:29:28.734 Non-Operational Permissive Mode: Not Supported 00:29:28.734 NVM Sets: Not Supported 00:29:28.734 Read Recovery Levels: Not Supported 00:29:28.734 Endurance Groups: Not Supported 00:29:28.734 Predictable Latency Mode: Not Supported 00:29:28.734 Traffic Based Keep ALive: Not Supported 00:29:28.734 Namespace Granularity: Not Supported 00:29:28.734 SQ Associations: Not Supported 00:29:28.734 UUID List: Not Supported 00:29:28.734 Multi-Domain Subsystem: Not Supported 00:29:28.734 Fixed Capacity Management: Not Supported 00:29:28.734 Variable Capacity Management: Not Supported 00:29:28.734 Delete Endurance Group: Not Supported 00:29:28.734 Delete NVM Set: Not Supported 00:29:28.734 Extended LBA Formats Supported: Not Supported 00:29:28.734 Flexible Data Placement 
Supported: Not Supported 00:29:28.734 00:29:28.734 Controller Memory Buffer Support 00:29:28.734 ================================ 00:29:28.734 Supported: No 00:29:28.734 00:29:28.734 Persistent Memory Region Support 00:29:28.734 ================================ 00:29:28.734 Supported: No 00:29:28.734 00:29:28.734 Admin Command Set Attributes 00:29:28.734 ============================ 00:29:28.734 Security Send/Receive: Not Supported 00:29:28.734 Format NVM: Not Supported 00:29:28.734 Firmware Activate/Download: Not Supported 00:29:28.734 Namespace Management: Not Supported 00:29:28.734 Device Self-Test: Not Supported 00:29:28.734 Directives: Not Supported 00:29:28.734 NVMe-MI: Not Supported 00:29:28.734 Virtualization Management: Not Supported 00:29:28.734 Doorbell Buffer Config: Not Supported 00:29:28.735 Get LBA Status Capability: Not Supported 00:29:28.735 Command & Feature Lockdown Capability: Not Supported 00:29:28.735 Abort Command Limit: 1 00:29:28.735 Async Event Request Limit: 1 00:29:28.735 Number of Firmware Slots: N/A 00:29:28.735 Firmware Slot 1 Read-Only: N/A 00:29:28.735 Firmware Activation Without Reset: N/A 00:29:28.735 Multiple Update Detection Support: N/A 00:29:28.735 Firmware Update Granularity: No Information Provided 00:29:28.735 Per-Namespace SMART Log: No 00:29:28.735 Asymmetric Namespace Access Log Page: Not Supported 00:29:28.735 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:28.735 Command Effects Log Page: Not Supported 00:29:28.735 Get Log Page Extended Data: Supported 00:29:28.735 Telemetry Log Pages: Not Supported 00:29:28.735 Persistent Event Log Pages: Not Supported 00:29:28.735 Supported Log Pages Log Page: May Support 00:29:28.735 Commands Supported & Effects Log Page: Not Supported 00:29:28.735 Feature Identifiers & Effects Log Page:May Support 00:29:28.735 NVMe-MI Commands & Effects Log Page: May Support 00:29:28.735 Data Area 4 for Telemetry Log: Not Supported 00:29:28.735 Error Log Page Entries Supported: 1 00:29:28.735 Keep Alive: Not Supported 00:29:28.735 00:29:28.735 NVM Command Set Attributes 00:29:28.735 ========================== 00:29:28.735 Submission Queue Entry Size 00:29:28.735 Max: 1 00:29:28.735 Min: 1 00:29:28.735 Completion Queue Entry Size 00:29:28.735 Max: 1 00:29:28.735 Min: 1 00:29:28.735 Number of Namespaces: 0 00:29:28.735 Compare Command: Not Supported 00:29:28.735 Write Uncorrectable Command: Not Supported 00:29:28.735 Dataset Management Command: Not Supported 00:29:28.735 Write Zeroes Command: Not Supported 00:29:28.735 Set Features Save Field: Not Supported 00:29:28.735 Reservations: Not Supported 00:29:28.735 Timestamp: Not Supported 00:29:28.735 Copy: Not Supported 00:29:28.735 Volatile Write Cache: Not Present 00:29:28.735 Atomic Write Unit (Normal): 1 00:29:28.735 Atomic Write Unit (PFail): 1 00:29:28.735 Atomic Compare & Write Unit: 1 00:29:28.735 Fused Compare & Write: Not Supported 00:29:28.735 Scatter-Gather List 00:29:28.735 SGL Command Set: Supported 00:29:28.735 SGL Keyed: Not Supported 00:29:28.735 SGL Bit Bucket Descriptor: Not Supported 00:29:28.735 SGL Metadata Pointer: Not Supported 00:29:28.735 Oversized SGL: Not Supported 00:29:28.735 SGL Metadata Address: Not Supported 00:29:28.735 SGL Offset: Supported 00:29:28.735 Transport SGL Data Block: Not Supported 00:29:28.735 Replay Protected Memory Block: Not Supported 00:29:28.735 00:29:28.735 Firmware Slot Information 00:29:28.735 ========================= 00:29:28.735 Active slot: 0 00:29:28.735 00:29:28.735 00:29:28.735 Error Log 00:29:28.735 
========= 00:29:28.735 00:29:28.735 Active Namespaces 00:29:28.735 ================= 00:29:28.735 Discovery Log Page 00:29:28.735 ================== 00:29:28.735 Generation Counter: 2 00:29:28.735 Number of Records: 2 00:29:28.735 Record Format: 0 00:29:28.735 00:29:28.735 Discovery Log Entry 0 00:29:28.735 ---------------------- 00:29:28.735 Transport Type: 3 (TCP) 00:29:28.735 Address Family: 1 (IPv4) 00:29:28.735 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:28.735 Entry Flags: 00:29:28.735 Duplicate Returned Information: 0 00:29:28.735 Explicit Persistent Connection Support for Discovery: 0 00:29:28.735 Transport Requirements: 00:29:28.735 Secure Channel: Not Specified 00:29:28.735 Port ID: 1 (0x0001) 00:29:28.735 Controller ID: 65535 (0xffff) 00:29:28.735 Admin Max SQ Size: 32 00:29:28.735 Transport Service Identifier: 4420 00:29:28.735 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:28.735 Transport Address: 10.0.0.1 00:29:28.735 Discovery Log Entry 1 00:29:28.735 ---------------------- 00:29:28.735 Transport Type: 3 (TCP) 00:29:28.735 Address Family: 1 (IPv4) 00:29:28.735 Subsystem Type: 2 (NVM Subsystem) 00:29:28.735 Entry Flags: 00:29:28.735 Duplicate Returned Information: 0 00:29:28.735 Explicit Persistent Connection Support for Discovery: 0 00:29:28.735 Transport Requirements: 00:29:28.735 Secure Channel: Not Specified 00:29:28.735 Port ID: 1 (0x0001) 00:29:28.735 Controller ID: 65535 (0xffff) 00:29:28.735 Admin Max SQ Size: 32 00:29:28.735 Transport Service Identifier: 4420 00:29:28.735 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:28.735 Transport Address: 10.0.0.1 00:29:28.735 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:28.995 get_feature(0x01) failed 00:29:28.995 get_feature(0x02) failed 00:29:28.995 get_feature(0x04) failed 00:29:28.995 ===================================================== 00:29:28.995 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:28.995 ===================================================== 00:29:28.995 Controller Capabilities/Features 00:29:28.995 ================================ 00:29:28.995 Vendor ID: 0000 00:29:28.995 Subsystem Vendor ID: 0000 00:29:28.995 Serial Number: c9fb1b09b2844b237894 00:29:28.995 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:28.995 Firmware Version: 6.8.9-20 00:29:28.995 Recommended Arb Burst: 6 00:29:28.995 IEEE OUI Identifier: 00 00 00 00:29:28.995 Multi-path I/O 00:29:28.996 May have multiple subsystem ports: Yes 00:29:28.996 May have multiple controllers: Yes 00:29:28.996 Associated with SR-IOV VF: No 00:29:28.996 Max Data Transfer Size: Unlimited 00:29:28.996 Max Number of Namespaces: 1024 00:29:28.996 Max Number of I/O Queues: 128 00:29:28.996 NVMe Specification Version (VS): 1.3 00:29:28.996 NVMe Specification Version (Identify): 1.3 00:29:28.996 Maximum Queue Entries: 1024 00:29:28.996 Contiguous Queues Required: No 00:29:28.996 Arbitration Mechanisms Supported 00:29:28.996 Weighted Round Robin: Not Supported 00:29:28.996 Vendor Specific: Not Supported 00:29:28.996 Reset Timeout: 7500 ms 00:29:28.996 Doorbell Stride: 4 bytes 00:29:28.996 NVM Subsystem Reset: Not Supported 00:29:28.996 Command Sets Supported 00:29:28.996 NVM Command Set: Supported 00:29:28.996 Boot Partition: Not Supported 00:29:28.996 
Memory Page Size Minimum: 4096 bytes 00:29:28.996 Memory Page Size Maximum: 4096 bytes 00:29:28.996 Persistent Memory Region: Not Supported 00:29:28.996 Optional Asynchronous Events Supported 00:29:28.996 Namespace Attribute Notices: Supported 00:29:28.996 Firmware Activation Notices: Not Supported 00:29:28.996 ANA Change Notices: Supported 00:29:28.996 PLE Aggregate Log Change Notices: Not Supported 00:29:28.996 LBA Status Info Alert Notices: Not Supported 00:29:28.996 EGE Aggregate Log Change Notices: Not Supported 00:29:28.996 Normal NVM Subsystem Shutdown event: Not Supported 00:29:28.996 Zone Descriptor Change Notices: Not Supported 00:29:28.996 Discovery Log Change Notices: Not Supported 00:29:28.996 Controller Attributes 00:29:28.996 128-bit Host Identifier: Supported 00:29:28.996 Non-Operational Permissive Mode: Not Supported 00:29:28.996 NVM Sets: Not Supported 00:29:28.996 Read Recovery Levels: Not Supported 00:29:28.996 Endurance Groups: Not Supported 00:29:28.996 Predictable Latency Mode: Not Supported 00:29:28.996 Traffic Based Keep ALive: Supported 00:29:28.996 Namespace Granularity: Not Supported 00:29:28.996 SQ Associations: Not Supported 00:29:28.996 UUID List: Not Supported 00:29:28.996 Multi-Domain Subsystem: Not Supported 00:29:28.996 Fixed Capacity Management: Not Supported 00:29:28.996 Variable Capacity Management: Not Supported 00:29:28.996 Delete Endurance Group: Not Supported 00:29:28.996 Delete NVM Set: Not Supported 00:29:28.996 Extended LBA Formats Supported: Not Supported 00:29:28.996 Flexible Data Placement Supported: Not Supported 00:29:28.996 00:29:28.996 Controller Memory Buffer Support 00:29:28.996 ================================ 00:29:28.996 Supported: No 00:29:28.996 00:29:28.996 Persistent Memory Region Support 00:29:28.996 ================================ 00:29:28.996 Supported: No 00:29:28.996 00:29:28.996 Admin Command Set Attributes 00:29:28.996 ============================ 00:29:28.996 Security Send/Receive: Not Supported 00:29:28.996 Format NVM: Not Supported 00:29:28.996 Firmware Activate/Download: Not Supported 00:29:28.996 Namespace Management: Not Supported 00:29:28.996 Device Self-Test: Not Supported 00:29:28.996 Directives: Not Supported 00:29:28.996 NVMe-MI: Not Supported 00:29:28.996 Virtualization Management: Not Supported 00:29:28.996 Doorbell Buffer Config: Not Supported 00:29:28.996 Get LBA Status Capability: Not Supported 00:29:28.996 Command & Feature Lockdown Capability: Not Supported 00:29:28.996 Abort Command Limit: 4 00:29:28.996 Async Event Request Limit: 4 00:29:28.996 Number of Firmware Slots: N/A 00:29:28.996 Firmware Slot 1 Read-Only: N/A 00:29:28.996 Firmware Activation Without Reset: N/A 00:29:28.996 Multiple Update Detection Support: N/A 00:29:28.996 Firmware Update Granularity: No Information Provided 00:29:28.996 Per-Namespace SMART Log: Yes 00:29:28.996 Asymmetric Namespace Access Log Page: Supported 00:29:28.996 ANA Transition Time : 10 sec 00:29:28.996 00:29:28.996 Asymmetric Namespace Access Capabilities 00:29:28.996 ANA Optimized State : Supported 00:29:28.996 ANA Non-Optimized State : Supported 00:29:28.996 ANA Inaccessible State : Supported 00:29:28.996 ANA Persistent Loss State : Supported 00:29:28.996 ANA Change State : Supported 00:29:28.996 ANAGRPID is not changed : No 00:29:28.996 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:28.996 00:29:28.996 ANA Group Identifier Maximum : 128 00:29:28.996 Number of ANA Group Identifiers : 128 00:29:28.996 Max Number of Allowed Namespaces : 1024 00:29:28.996 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:28.996 Command Effects Log Page: Supported 00:29:28.996 Get Log Page Extended Data: Supported 00:29:28.996 Telemetry Log Pages: Not Supported 00:29:28.996 Persistent Event Log Pages: Not Supported 00:29:28.996 Supported Log Pages Log Page: May Support 00:29:28.996 Commands Supported & Effects Log Page: Not Supported 00:29:28.996 Feature Identifiers & Effects Log Page:May Support 00:29:28.996 NVMe-MI Commands & Effects Log Page: May Support 00:29:28.996 Data Area 4 for Telemetry Log: Not Supported 00:29:28.996 Error Log Page Entries Supported: 128 00:29:28.996 Keep Alive: Supported 00:29:28.996 Keep Alive Granularity: 1000 ms 00:29:28.996 00:29:28.996 NVM Command Set Attributes 00:29:28.996 ========================== 00:29:28.996 Submission Queue Entry Size 00:29:28.996 Max: 64 00:29:28.996 Min: 64 00:29:28.996 Completion Queue Entry Size 00:29:28.996 Max: 16 00:29:28.996 Min: 16 00:29:28.996 Number of Namespaces: 1024 00:29:28.996 Compare Command: Not Supported 00:29:28.996 Write Uncorrectable Command: Not Supported 00:29:28.996 Dataset Management Command: Supported 00:29:28.996 Write Zeroes Command: Supported 00:29:28.996 Set Features Save Field: Not Supported 00:29:28.996 Reservations: Not Supported 00:29:28.996 Timestamp: Not Supported 00:29:28.996 Copy: Not Supported 00:29:28.996 Volatile Write Cache: Present 00:29:28.996 Atomic Write Unit (Normal): 1 00:29:28.996 Atomic Write Unit (PFail): 1 00:29:28.996 Atomic Compare & Write Unit: 1 00:29:28.996 Fused Compare & Write: Not Supported 00:29:28.996 Scatter-Gather List 00:29:28.996 SGL Command Set: Supported 00:29:28.996 SGL Keyed: Not Supported 00:29:28.996 SGL Bit Bucket Descriptor: Not Supported 00:29:28.996 SGL Metadata Pointer: Not Supported 00:29:28.996 Oversized SGL: Not Supported 00:29:28.996 SGL Metadata Address: Not Supported 00:29:28.996 SGL Offset: Supported 00:29:28.996 Transport SGL Data Block: Not Supported 00:29:28.996 Replay Protected Memory Block: Not Supported 00:29:28.996 00:29:28.996 Firmware Slot Information 00:29:28.996 ========================= 00:29:28.996 Active slot: 0 00:29:28.996 00:29:28.996 Asymmetric Namespace Access 00:29:28.996 =========================== 00:29:28.996 Change Count : 0 00:29:28.996 Number of ANA Group Descriptors : 1 00:29:28.996 ANA Group Descriptor : 0 00:29:28.996 ANA Group ID : 1 00:29:28.996 Number of NSID Values : 1 00:29:28.996 Change Count : 0 00:29:28.996 ANA State : 1 00:29:28.996 Namespace Identifier : 1 00:29:28.996 00:29:28.996 Commands Supported and Effects 00:29:28.996 ============================== 00:29:28.996 Admin Commands 00:29:28.996 -------------- 00:29:28.996 Get Log Page (02h): Supported 00:29:28.996 Identify (06h): Supported 00:29:28.996 Abort (08h): Supported 00:29:28.996 Set Features (09h): Supported 00:29:28.996 Get Features (0Ah): Supported 00:29:28.996 Asynchronous Event Request (0Ch): Supported 00:29:28.996 Keep Alive (18h): Supported 00:29:28.996 I/O Commands 00:29:28.996 ------------ 00:29:28.996 Flush (00h): Supported 00:29:28.996 Write (01h): Supported LBA-Change 00:29:28.996 Read (02h): Supported 00:29:28.996 Write Zeroes (08h): Supported LBA-Change 00:29:28.996 Dataset Management (09h): Supported 00:29:28.996 00:29:28.996 Error Log 00:29:28.996 ========= 00:29:28.996 Entry: 0 00:29:28.996 Error Count: 0x3 00:29:28.996 Submission Queue Id: 0x0 00:29:28.996 Command Id: 0x5 00:29:28.996 Phase Bit: 0 00:29:28.996 Status Code: 0x2 00:29:28.996 Status Code Type: 0x0 00:29:28.996 Do Not Retry: 1 00:29:28.996 
Error Location: 0x28 00:29:28.996 LBA: 0x0 00:29:28.996 Namespace: 0x0 00:29:28.996 Vendor Log Page: 0x0 00:29:28.996 ----------- 00:29:28.996 Entry: 1 00:29:28.996 Error Count: 0x2 00:29:28.996 Submission Queue Id: 0x0 00:29:28.996 Command Id: 0x5 00:29:28.996 Phase Bit: 0 00:29:28.996 Status Code: 0x2 00:29:28.996 Status Code Type: 0x0 00:29:28.996 Do Not Retry: 1 00:29:28.996 Error Location: 0x28 00:29:28.996 LBA: 0x0 00:29:28.996 Namespace: 0x0 00:29:28.996 Vendor Log Page: 0x0 00:29:28.996 ----------- 00:29:28.996 Entry: 2 00:29:28.996 Error Count: 0x1 00:29:28.997 Submission Queue Id: 0x0 00:29:28.997 Command Id: 0x4 00:29:28.997 Phase Bit: 0 00:29:28.997 Status Code: 0x2 00:29:28.997 Status Code Type: 0x0 00:29:28.997 Do Not Retry: 1 00:29:28.997 Error Location: 0x28 00:29:28.997 LBA: 0x0 00:29:28.997 Namespace: 0x0 00:29:28.997 Vendor Log Page: 0x0 00:29:28.997 00:29:28.997 Number of Queues 00:29:28.997 ================ 00:29:28.997 Number of I/O Submission Queues: 128 00:29:28.997 Number of I/O Completion Queues: 128 00:29:28.997 00:29:28.997 ZNS Specific Controller Data 00:29:28.997 ============================ 00:29:28.997 Zone Append Size Limit: 0 00:29:28.997 00:29:28.997 00:29:28.997 Active Namespaces 00:29:28.997 ================= 00:29:28.997 get_feature(0x05) failed 00:29:28.997 Namespace ID:1 00:29:28.997 Command Set Identifier: NVM (00h) 00:29:28.997 Deallocate: Supported 00:29:28.997 Deallocated/Unwritten Error: Not Supported 00:29:28.997 Deallocated Read Value: Unknown 00:29:28.997 Deallocate in Write Zeroes: Not Supported 00:29:28.997 Deallocated Guard Field: 0xFFFF 00:29:28.997 Flush: Supported 00:29:28.997 Reservation: Not Supported 00:29:28.997 Namespace Sharing Capabilities: Multiple Controllers 00:29:28.997 Size (in LBAs): 3750748848 (1788GiB) 00:29:28.997 Capacity (in LBAs): 3750748848 (1788GiB) 00:29:28.997 Utilization (in LBAs): 3750748848 (1788GiB) 00:29:28.997 UUID: 6ada2ec8-2daf-472b-963a-a1839582bef0 00:29:28.997 Thin Provisioning: Not Supported 00:29:28.997 Per-NS Atomic Units: Yes 00:29:28.997 Atomic Write Unit (Normal): 8 00:29:28.997 Atomic Write Unit (PFail): 8 00:29:28.997 Preferred Write Granularity: 8 00:29:28.997 Atomic Compare & Write Unit: 8 00:29:28.997 Atomic Boundary Size (Normal): 0 00:29:28.997 Atomic Boundary Size (PFail): 0 00:29:28.997 Atomic Boundary Offset: 0 00:29:28.997 NGUID/EUI64 Never Reused: No 00:29:28.997 ANA group ID: 1 00:29:28.997 Namespace Write Protected: No 00:29:28.997 Number of LBA Formats: 1 00:29:28.997 Current LBA Format: LBA Format #00 00:29:28.997 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:28.997 00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:28.997 rmmod nvme_tcp 00:29:28.997 rmmod nvme_fabrics 00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:28.997 17:45:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
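clean_kernel_target above undoes the configfs setup in strict reverse order: disable the namespace, unlink the port's subsystem symlink, then rmdir the leaf directories before unloading nvmet_tcp/nvmet. The order matters because configfs refuses to remove a subsystem that a port still references. A condensed sketch (the echo 0 target is inferred as the namespace enable file; everything else is read straight from the trace):

    # Teardown mirror of the setup sketch earlier; reverse order is required.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"      # inferred redirection target
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet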
00:29:31.536 17:45:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:29:34.848 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:29:34.848 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:29:35.109
00:29:35.109 real 0m19.595s
00:29:35.109 user 0m5.374s
00:29:35.109 sys 0m11.238s
00:29:35.109 17:45:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:35.109 17:45:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:29:35.109 ************************************
00:29:35.109 END TEST nvmf_identify_kernel_target
00:29:35.109 ************************************
00:29:35.109 17:45:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:29:35.109 17:45:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:35.109 17:45:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:35.109 17:45:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:35.370 ************************************
00:29:35.370 START TEST nvmf_auth_host
00:29:35.370 ************************************
00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:29:35.370 * Looking for test storage...
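The nvmf_auth_host test that starts here first probes the installed lcov (lcov --version piped through awk '{print $NF}'), and the lt 1.15 2 walk in the trace below is scripts/common.sh's cmp_versions comparing the two versions field by field after splitting them on '.', '-', and ':'. A compact equivalent, as a sketch rather than the SPDK implementation:

    # Sketch: succeed when version $1 sorts strictly before version $2.
    lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0   # missing fields count as 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"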
00:29:35.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.370 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:35.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.371 --rc genhtml_branch_coverage=1 00:29:35.371 --rc genhtml_function_coverage=1 00:29:35.371 --rc genhtml_legend=1 00:29:35.371 --rc geninfo_all_blocks=1 00:29:35.371 --rc geninfo_unexecuted_blocks=1 00:29:35.371 00:29:35.371 ' 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:35.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.371 --rc genhtml_branch_coverage=1 00:29:35.371 --rc genhtml_function_coverage=1 00:29:35.371 --rc genhtml_legend=1 00:29:35.371 --rc geninfo_all_blocks=1 00:29:35.371 --rc geninfo_unexecuted_blocks=1 00:29:35.371 00:29:35.371 ' 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:35.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.371 --rc genhtml_branch_coverage=1 00:29:35.371 --rc genhtml_function_coverage=1 00:29:35.371 --rc genhtml_legend=1 00:29:35.371 --rc geninfo_all_blocks=1 00:29:35.371 --rc geninfo_unexecuted_blocks=1 00:29:35.371 00:29:35.371 ' 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:35.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.371 --rc genhtml_branch_coverage=1 00:29:35.371 --rc genhtml_function_coverage=1 00:29:35.371 --rc genhtml_legend=1 00:29:35.371 --rc geninfo_all_blocks=1 00:29:35.371 --rc geninfo_unexecuted_blocks=1 00:29:35.371 00:29:35.371 ' 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.371 17:45:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:35.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:35.371 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:35.372 17:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.507 17:45:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:43.507 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:43.507 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.507 
17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:43.507 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:43.507 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.507 17:45:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:43.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:43.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms
00:29:43.507 
00:29:43.507 --- 10.0.0.2 ping statistics ---
00:29:43.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:43.507 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:43.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:43.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms
00:29:43.507 
00:29:43.507 --- 10.0.0.1 ping statistics ---
00:29:43.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:43.507 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1723029
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1723029
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1723029 ']'
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
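The nvmf_tcp_init trace above carves the two e810 ports into an initiator/target pair: cvl_0_0 is moved into a private network namespace for the target, both ends get addresses on 10.0.0.0/24, an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. Condensed into a standalone sketch (interface names, addresses and flags taken from this trace; run as root):

    #!/usr/bin/env bash
    set -e
    TGT_IF=cvl_0_0          # netdev handed to the target namespace
    INI_IF=cvl_0_1          # netdev left in the root namespace (initiator side)
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"          # target port disappears from the root ns
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> root ns
    # a target started under the namespace now listens on 10.0.0.2, e.g.:
    # ip netns exec "$NS" .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth

Moving only the target port into the namespace lets a single machine drive real NIC-to-NIC TCP traffic while the initiator keeps using the root namespace's kernel stack.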
00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.507 17:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=81096491d44d620b693f20b1990c4967 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.x87 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 81096491d44d620b693f20b1990c4967 0 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 81096491d44d620b693f20b1990c4967 0 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=81096491d44d620b693f20b1990c4967 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:43.768 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.x87 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.x87 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.x87 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.029 17:45:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=970e579483206c6ff36d08225be39ba5c944f2e775db602dcb0756687f30685f 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Nza 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 970e579483206c6ff36d08225be39ba5c944f2e775db602dcb0756687f30685f 3 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 970e579483206c6ff36d08225be39ba5c944f2e775db602dcb0756687f30685f 3 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=970e579483206c6ff36d08225be39ba5c944f2e775db602dcb0756687f30685f 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Nza 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Nza 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Nza 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9c447f2c3613612d215c4a8703156dc6bfe997505fe7edc3 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Vxc 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9c447f2c3613612d215c4a8703156dc6bfe997505fe7edc3 0 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9c447f2c3613612d215c4a8703156dc6bfe997505fe7edc3 0 
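gen_dhchap_key above draws each secret from /dev/urandom with xxd -p and hands it to format_dhchap_key/format_key; the body of the inline "python -" step traced below is not echoed by xtrace. Judging from the DHHC-1 strings that surface later in this log (DHHC-1:00:OWM0...koO56w==: for the key 9c447f2c...), the formatter base64-encodes the ASCII hex string with a 4-byte CRC-32 trailer, which is the standard NVMe DH-HMAC-CHAP secret representation. A sketch under that assumption (little-endian CRC byte order is assumed, consistent with the observed output):

    # sketch: reproduce format_dhchap_key <hexkey> <digest-id> from this trace
    key=9c447f2c3613612d215c4a8703156dc6bfe997505fe7edc3   # from: xxd -p -c0 -l 24 /dev/urandom
    digest=0                                               # 0=null, 1=sha256, 2=sha384, 3=sha512
    python3 - "$key" "$digest" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # the hex string itself is the secret bytes
    crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC-32 trailer of the DHHC-1 format
    print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    EOF

The two hex digits after "DHHC-1:" carry the digest id from the digests map above (00 means the secret is used untransformed), which is why the null-digest keys in this run all begin with DHHC-1:00:.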
00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9c447f2c3613612d215c4a8703156dc6bfe997505fe7edc3 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:44.029 17:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Vxc 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Vxc 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Vxc 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b748325e372bb4291772fa4574d13a55caffa5284717f6a0 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.nJY 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b748325e372bb4291772fa4574d13a55caffa5284717f6a0 2 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b748325e372bb4291772fa4574d13a55caffa5284717f6a0 2 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b748325e372bb4291772fa4574d13a55caffa5284717f6a0 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.nJY 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.nJY 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.nJY 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.029 17:45:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=154a69f401d202c058c05dd912eb6d21 00:29:44.029 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3NF 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 154a69f401d202c058c05dd912eb6d21 1 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 154a69f401d202c058c05dd912eb6d21 1 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=154a69f401d202c058c05dd912eb6d21 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3NF 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3NF 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.3NF 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f2cb31bf58dcdf7ad82ca1add93e0be8 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ALs 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f2cb31bf58dcdf7ad82ca1add93e0be8 1 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f2cb31bf58dcdf7ad82ca1add93e0be8 1 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=f2cb31bf58dcdf7ad82ca1add93e0be8 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ALs 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ALs 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ALs 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:44.290 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d29dd26704f6107a3932a9f46d42b873b779b1497f384be0 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6rv 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d29dd26704f6107a3932a9f46d42b873b779b1497f384be0 2 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d29dd26704f6107a3932a9f46d42b873b779b1497f384be0 2 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d29dd26704f6107a3932a9f46d42b873b779b1497f384be0 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6rv 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6rv 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.6rv 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:44.291 17:45:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=599854685cf4eea579a4f4c551e3d663 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.d2u 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 599854685cf4eea579a4f4c551e3d663 0 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 599854685cf4eea579a4f4c551e3d663 0 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=599854685cf4eea579a4f4c551e3d663 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.d2u 00:29:44.291 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.d2u 00:29:44.551 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.d2u 00:29:44.551 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:44.551 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:44.551 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:44.551 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:44.551 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:44.551 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:44.551 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:44.551 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8973e4bcda8d5fde3c8dd4ee54aad899b80cc8f4860d675e9e46ad22af255d2a 00:29:44.551 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.UFe 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8973e4bcda8d5fde3c8dd4ee54aad899b80cc8f4860d675e9e46ad22af255d2a 3 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8973e4bcda8d5fde3c8dd4ee54aad899b80cc8f4860d675e9e46ad22af255d2a 3 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8973e4bcda8d5fde3c8dd4ee54aad899b80cc8f4860d675e9e46ad22af255d2a 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.UFe 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.UFe 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.UFe 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1723029 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1723029 ']' 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.x87 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.552 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Nza ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Nza 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Vxc 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.nJY ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.nJY 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.3NF 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ALs ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ALs 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6rv 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.d2u ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.d2u 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.UFe 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.813 17:45:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:44.813 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:44.814 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:44.814 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:44.814 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:44.814 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:29:44.814 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:29:44.814 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:29:44.814 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:29:44.814 17:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:29:48.137 Waiting for block devices as requested
00:29:48.137 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:29:48.137 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:29:48.397 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:29:48.397 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:29:48.397 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:29:48.656 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:29:48.656 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:29:48.656 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:29:48.916 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:29:48.916 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:29:49.176 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:29:49.176 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:29:49.176 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:29:49.176 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:29:49.436 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:29:49.436 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:29:49.436 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:29:50.376 No valid GPT data, bailing
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
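The three mkdir calls above create the configfs skeleton for a kernel NVMe-oF target; the echo and ln -s commands that follow in the trace populate it and publish the subsystem on the TCP port. xtrace does not show the redirection targets, so the attribute files in this consolidated sketch are the standard /sys/kernel/config/nvmet names they presumably map to:

    # sketch: kernel NVMe/TCP target via configfs (paths and values from this log;
    # the attribute file names are assumed from the standard nvmet layout)
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"
    echo 1 > "$sub/attr_allow_any_host"      # revoked again below via allowed_hosts
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"         # start serving the subsystem on the port

The kernel target serves as the counterpart here so that the SPDK host code under test negotiates DH-HMAC-CHAP against an independent implementation.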
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:29:50.376 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:29:50.637 
00:29:50.637 Discovery Log Number of Records 2, Generation counter 2
00:29:50.637 =====Discovery Log Entry 0======
00:29:50.637 trtype: tcp
00:29:50.637 adrfam: ipv4
00:29:50.637 subtype: current discovery subsystem
00:29:50.637 treq: not specified, sq flow control disable supported
00:29:50.637 portid: 1
00:29:50.637 trsvcid: 4420
00:29:50.637 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:29:50.637 traddr: 10.0.0.1
00:29:50.637 eflags: none
00:29:50.637 sectype: none
00:29:50.637 =====Discovery Log Entry 1======
00:29:50.637 trtype: tcp
00:29:50.637 adrfam: ipv4
00:29:50.637 subtype: nvme subsystem
00:29:50.637 treq: not specified, sq flow control disable supported
00:29:50.637 portid: 1
00:29:50.637 trsvcid: 4420
00:29:50.637 subnqn: nqn.2024-02.io.spdk:cnode0
00:29:50.637 traddr: 10.0.0.1
00:29:50.637 eflags: none
00:29:50.637 sectype: none
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==:
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==:
00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
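Above, host0 is registered with the kernel target (a hosts/ entry, allow_any_host turned off, and a symlink into the subsystem's allowed_hosts), and nvmet_auth_set_key begins storing the DH-HMAC-CHAP material; the writes continue below with the dhgroup and both DHHC-1 secrets. The echo targets are again hidden by xtrace; the sketch assumes they are the standard dhchap_* attributes of the nvmet host directory:

    # sketch: provision DH-HMAC-CHAP for one host on the kernel target
    # (dhchap_* attribute names assumed; secrets copied from the trace)
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir "$host"
    echo 0 > "$sub/attr_allow_any_host"    # only allowed_hosts may connect now
    ln -s "$host" "$sub/allowed_hosts/"
    echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest used for CHAP
    echo ffdhe2048 > "$host/dhchap_dhgroup"         # DH group
    echo 'DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==:' > "$host/dhchap_key"
    echo 'DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==:' > "$host/dhchap_ctrl_key"

On the initiator side the same secrets were loaded earlier with keyring_file_add_key, and the trace that follows enables every digest/dhgroup combination via bdev_nvme_set_options before bdev_nvme_attach_controller connects with --dhchap-key key1 (plus --dhchap-ctrlr-key ckey1 for bidirectional authentication).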
-- host/auth.sh@49 -- # echo ffdhe2048 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.637 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.638 nvme0n1 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
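Target-side summary: up to this point host/auth.sh is driving the kernel nvmet target through configfs. It publishes nqn.2024-02.io.spdk:cnode0 on a TCP listener at 10.0.0.1:4420, restricts it to the single host nqn.2024-02.io.spdk:host0, and loads the DH-HMAC-CHAP material for one digest/dhgroup/key combination. A condensed sketch of those steps follows, using the values from the trace; the configfs attribute names (device_path, addr_traddr, dhchap_hash, and so on) are my reconstruction, since the trace shows only the echo arguments, not their destinations, and I have left out the first echo (the SPDK-nqn... string, most likely the subsystem's model/serial attribute) for that reason:

    # Sketch: kernel NVMe-oF TCP target with DH-HMAC-CHAP, assuming the
    # nvmet and nvmet-tcp modules are loaded and configfs is mounted.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    mkdir -p "$subsys/namespaces/1" "$port" "$host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing device
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"                      # TCP listener
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                      # expose subsystem on the port

    echo 0 > "$subsys/attr_allow_any_host"                   # allow-list only
    ln -s "$host" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"

    # DH-HMAC-CHAP material, as nvmet_auth_set_key sha256 ffdhe2048 1 writes it
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==:' > "$host/dhchap_key"
    echo 'DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==:' > "$host/dhchap_ctrl_key"

The nvme discover call in between is a sanity check that the port answers: it should report exactly two records, the well-known discovery subsystem (nqn.2014-08.org.nvmexpress.discovery) and cnode0 itself, which is what the log shows.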
00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:50.638 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.898 nvme0n1 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.898 17:45:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.898 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.899 17:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.159 nvme0n1 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.159 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.420 nvme0n1 00:29:51.420 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.420 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.420 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.420 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:51.420 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.420 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.420 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.420 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.420 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.420 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.421 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.681 nvme0n1 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:51.681 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.682 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.942 nvme0n1 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.942 17:45:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.942 17:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.203 nvme0n1 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.203 
17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.203 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.465 nvme0n1 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.465 17:45:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.465 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.725 nvme0n1 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.725 17:45:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.725 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.985 nvme0n1 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.985 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:52.986 17:45:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.986 17:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.246 nvme0n1 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.246 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.505 nvme0n1 00:29:53.505 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.505 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:29:53.506 17:45:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.506 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.766 nvme0n1 00:29:53.766 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:29:53.766 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.766 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.766 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.766 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.766 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.026 17:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.286 nvme0n1 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:54.286 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.287 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.548 nvme0n1 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.548 17:45:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.548 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.809 nvme0n1 00:29:54.809 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.809 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.809 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.809 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.809 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.809 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.809 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.809 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.809 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.809 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.069 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.069 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:55.069 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.069 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:29:55.069 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.069 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:55.069 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:55.069 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:55.069 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.070 17:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.330 nvme0n1 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 
00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.330 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.900 nvme0n1 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.900 17:45:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:55.900 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.901 17:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.489 nvme0n1 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:56.489 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.490 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.750 nvme0n1 00:29:56.750 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.750 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.750 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.750 17:45:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.750 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.750 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.010 17:45:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.010 17:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.269 nvme0n1 00:29:57.269 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.269 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.269 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.269 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.269 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.269 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.269 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.269 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.269 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.270 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.529 17:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:58.100 nvme0n1 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.100 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.669 nvme0n1 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:58.930 
17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.930 17:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.506 nvme0n1 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.506 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.507 
17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.507 17:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.160 nvme0n1 00:30:00.160 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.160 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.160 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:00.160 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.160 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.160 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.463 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.043 nvme0n1 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.043 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.044 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.044 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.044 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.044 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.044 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.044 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:01.044 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.044 17:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.303 nvme0n1 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:01.303 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.304 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.563 nvme0n1 00:30:01.563 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.563 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.563 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.563 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.563 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:01.564 17:45:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.564 nvme0n1 00:30:01.564 17:45:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.564 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.824 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.825 nvme0n1 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.825 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.087 17:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.087 nvme0n1 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.087 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.348 nvme0n1 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.348 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.349 
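[By this point the run has moved on to the sha384/ffdhe3072 leg. The overall shape driving the trace is a three-level sweep, reconstructed here from the loop headers visible at host/auth.sh@100-103; array contents beyond what this excerpt shows (sha256, sha384; ffdhe2048, ffdhe3072, ffdhe8192; key IDs 0-4) are an assumption:

  for digest in "${digests[@]}"; do          # sha256, sha384, ...
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ffdhe8192, ...
      for keyid in "${!keys[@]}"; do         # 0..4; key 4 has no controller key, so ckey is empty
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done
]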
17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.349 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.609 17:45:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.609 nvme0n1 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.609 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.870 nvme0n1 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.870 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.134 17:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 nvme0n1 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:03.134 
17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.134 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.393 nvme0n1 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.393 
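Each such pass is driven by the nested loops at host/auth.sh@101-102 (for dhgroup in "${dhgroups[@]}" over the keys indexed by "${!keys[@]}"), and before every connect the target side is primed by nvmet_auth_set_key (host/auth.sh@42-51), whose echoes appear throughout the trace. Note that bash xtrace does not print redirections, so the destinations of those echoes are not in this log; the sketch below assumes the standard Linux nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), and only the echoed values themselves are confirmed by the trace:

    # nvmet_auth_set_key as reconstructed from the xtrace (host/auth.sh@42-51).
    # The configfs paths are an assumption: xtrace omits redirections, so the
    # log only proves what was echoed, not where it went.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3 key ckey
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac(${digest})" > "$host/dhchap_hash"     # e.g. hmac(sha384)
        echo "$dhgroup"        > "$host/dhchap_dhgroup"  # e.g. ffdhe3072
        echo "$key"            > "$host/dhchap_key"      # DHHC-1:0x:...
        # The controller (bidirectional) key is optional; keyid 4 has none.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }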
17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:03.393 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.652 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.911 nvme0n1 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:03.911 17:45:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.911 17:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.171 nvme0n1 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.171 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.430 nvme0n1 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.430 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.690 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.690 nvme0n1 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:04.949 17:45:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.949 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.950 17:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.208 nvme0n1 00:30:05.208 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.208 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.208 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.208 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.208 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.208 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.209 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.777 nvme0n1 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.777 17:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.037 nvme0n1 00:30:06.037 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.297 17:45:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.297 17:45:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.297 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.298 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.298 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.298 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.558 nvme0n1 00:30:06.558 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.558 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.558 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.558 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.558 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.558 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:06.819 17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.819 
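Every connect_authenticate iteration in this trace repeats the same five steps (auth.sh@55 through @65). Reassembled from the trace into one function for readability; rpc_cmd and get_main_ns_ip are defined elsewhere in the suite, and the NQNs are the ones used throughout this log:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3 ckey
      # auth.sh@58: yields the flag pair only when a controller key exists.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      # auth.sh@60: restrict the initiator to the combination under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # auth.sh@61: attach over TCP using the host key for this key slot.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
      # auth.sh@64/@65: authentication succeeded iff the controller shows up.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }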
17:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.080 nvme0n1 00:30:07.080 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.080 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.080 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.080 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.080 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.080 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.340 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.601 nvme0n1 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.601 17:45:59 
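The keyid=4 pass just above attaches with --dhchap-key key4 and no --dhchap-ctrlr-key: ckeys[4] is empty, so the [[ -z '' ]] guard fires and the ${...:+...} expansion at auth.sh@58 collapses to nothing. A standalone illustration of that expansion (key material elided):

  ckeys=([0]="DHHC-1:03:..." [4]="")   # slot 4 has no controller key
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 0: "${ckey[@]}" contributes nothing to the RPC
  keyid=0
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey0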
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:07.601 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.861 17:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.433 nvme0n1 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.433 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:08.434 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.434 17:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.004 nvme0n1 00:30:09.004 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.004 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.004 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.004 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.004 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.004 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
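The get_main_ns_ip block repeated before every attach (nvmf/common.sh@769 through @783) resolves the target address indirectly: the transport selects a variable name, and that variable holds the IP, 10.0.0.1 in this run. Reconstructed from the trace; the name of the variable carrying the transport ("tcp" here) is an assumption:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # common.sh@772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP        # common.sh@773
      [[ -z $TEST_TRANSPORT ]] && return 1                    # @775: [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @775
      ip=${ip_candidates[$TEST_TRANSPORT]}                    # @776
      [[ -z ${!ip} ]] && return 1    # @778: indirect expansion, 10.0.0.1 here
      echo "${!ip}"                  # @783
  }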
xtrace_disable 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.264 
17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.264 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.833 nvme0n1 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.833 17:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.771 nvme0n1 00:30:10.771 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.771 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.771 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.771 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.771 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.771 17:46:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.771 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.771 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.771 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.771 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.772 17:46:02 
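The recurring [[ nvme0 == \n\v\m\e\0 ]] comparisons in this trace look garbled but are not: xtrace backslash-escapes the quoted right-hand side of == so it is matched literally rather than as a glob. The verification and teardown steps, written out plainly:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')  # auth.sh@64
  [[ $name == "nvme0" ]]                     # literal compare, not a pattern
  rpc_cmd bdev_nvme_detach_controller nvme0  # auth.sh@65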
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.772 17:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.341 nvme0n1 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.341 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:11.602 nvme0n1 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.602 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.862 nvme0n1 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:11.862 
17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.862 nvme0n1 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.862 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:12.122 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.123 
17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.123 17:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.123 nvme0n1 00:30:12.123 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.123 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.123 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.123 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.123 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.123 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.123 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.123 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.123 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.123 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.383 nvme0n1 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.383 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.384 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.644 nvme0n1 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.644 
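A note on the secrets echoed throughout the trace: they use the NVMe DH-HMAC-CHAP interchange format, DHHC-1:<hash id>:<base64 payload>:, where the hash id marks an optional key transformation (00 none, 01/02/03 for SHA-256/384/512, as I read the format used by nvme-cli's gen-dhchap-key) and the payload is the secret with a 4-byte CRC-32 trailer appended. A quick structural check on one of the traced keys, sketched in plain bash (these are throwaway test secrets already printed by the log, so reusing one here is harmless):

# DHHC-1:<hash id>:<base64(secret || 4-byte CRC-32)>:
key='DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX:'
IFS=: read -r _ hash_id b64 _ <<<"$key"
echo "hash id: $hash_id"                 # 00 -> no transformation
echo -n "$b64" | base64 -d | wc -c       # 36 = 32-byte secret + CRC-32 trailer

Keyid 4 is the exception in each pass: its ckey is empty, so the [[ -z '' ]] branch at auth.sh@51 skips the controller key and the attach runs with --dhchap-key key4 only, exercising unidirectional (host-only) authentication alongside the bidirectional cases.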
17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.644 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.904 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.904 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.905 17:46:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.905 nvme0n1 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:12.905 17:46:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.905 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.166 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.166 nvme0n1 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:13.166 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.166 17:46:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.426 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.426 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.426 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.426 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.426 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.426 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.427 nvme0n1 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:13.427 
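The get_main_ns_ip helper that runs before every attach is the same few traced lines each time: it maps the transport to the name of the environment variable holding the initiator-side IP, then dereferences that name. A condensed sketch of the logic, using the variable names visible in the nvmf/common.sh trace (the hard-coded tcp selection and the 10.0.0.1 value reflect this particular run, not the helper in general):

NVMF_INITIATOR_IP=10.0.0.1   # as configured for this job's network setup
declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
var=${ip_candidates[tcp]}    # transport is tcp here -> NVMF_INITIATOR_IP
echo "${!var}"               # bash indirect expansion; prints 10.0.0.1

With that helper resolved, the outer loops are easy to follow in the trace: under the sha512 digest the suite sweeps the dhgroups in order (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144), and for each dhgroup runs the full set of keyids 0 through 4 before moving on.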
17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.427 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.686 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.686 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.686 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.686 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.686 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.686 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.686 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.686 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:13.687 nvme0n1 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:13.687 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:13.947 17:46:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.947 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.208 nvme0n1 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.208 17:46:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.208 17:46:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.208 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.475 nvme0n1 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.475 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.736 nvme0n1 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.736 17:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.996 nvme0n1 00:30:14.996 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.996 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.996 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.996 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.996 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.996 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.255 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.256 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.516 nvme0n1 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.516 17:46:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.516 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.085 nvme0n1 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:16.085 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:16.086 17:46:07 
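Stripped of the xtrace noise, every pass of the sweep above follows the same five-step shape. The sketch below condenses it for the sha512/ffdhe6144/keyid=1 pass whose attach command has just been printed; rpc_cmd and nvmet_auth_set_key are helpers from the SPDK test scripts (the echo 'hmac(sha512)' / echo ffdhe6144 / echo DHHC-1:... lines are those values being written into the kernel nvmet target's configuration, with the destination paths elided from this trace), so treat this as a reconstruction from the trace rather than the literal script text:

    # One pass of the digest/dhgroup/keyid sweep, reconstructed from the xtrace output.
    nvmet_auth_set_key sha512 ffdhe6144 1        # target side: hmac(sha512), ffdhe6144, key1 (+ ckey1)
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1    # DH-HMAC-CHAP negotiation happens here
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # attach succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0    # clean up before the next keyid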
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.086 17:46:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.345 nvme0n1 00:30:16.345 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.345 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.345 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.345 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.345 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.345 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.605 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.865 nvme0n1 00:30:16.865 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.865 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.865 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.865 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.865 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.865 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.865 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.865 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.865 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.865 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.125 17:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.385 nvme0n1 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:17.385 17:46:09 
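Note the asymmetry in the keyid=4 pass above: ckey= is empty, so only the host proves itself and the bidirectional step is skipped. The ckey=(${ckeys[keyid]:+...}) assignment at host/auth.sh@58 is what makes that work: bash's ${var:+word} expansion yields the --dhchap-ctrlr-key argument pair only when a controller key exists. A minimal standalone illustration of the idiom (array contents are made up for the demo):

    ckeys=([1]="secret1" [4]="")
    for keyid in 1 4; do
        # expands to two words (--dhchap-ctrlr-key ckeyN) when ckeys[keyid] is non-empty,
        # and to an empty array otherwise
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr key>}"
    done
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=4 -> <no ctrlr key>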
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.385 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.645 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.906 nvme0n1 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTY0OTFkNDRkNjIwYjY5M2YyMGIxOTkwYzQ5Njc+szFX: 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: ]] 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwZTU3OTQ4MzIwNmM2ZmYzNmQwODIyNWJlMzliYTVjOTQ0ZjJlNzc1ZGI2MDJkY2IwNzU2Njg3ZjMwNjg1Zmqr4aA=: 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.906 17:46:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.845 nvme0n1 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:18.845 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.846 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.416 nvme0n1 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.416 17:46:11 
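All the secrets echoed through this trace use the DH-HMAC-CHAP secret representation from the NVMe specification, DHHC-1:<t>:<base64>:, where <t> is 00 when the decoded bytes are used as the key directly and 01/02/03 when they are first transformed with SHA-256/384/512. The base64 payload is the raw secret followed by a 4-byte CRC-32 (little-endian, as nvme-cli's generator appends it; stated here as background, not something visible in the trace). Pulling apart the keyid=2 secret from this run:

    key='DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j:'
    IFS=: read -r tag transform b64 _ <<< "$key"
    echo "format=$tag transform=$transform"   # 01 -> 32-byte secret, hashed with SHA-256 before use
    printf '%s' "$b64" | base64 -d | wc -c    # 36 = 32-byte secret + 4-byte CRC-32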
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:19.416 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.417 17:46:11 
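Before each attach, the get_main_ns_ip block that unfolds next (nvmf/common.sh@769-783) resolves which address the host should dial. It keeps an associative array mapping transport to the name of the environment variable that holds the address, then dereferences the chosen name. A condensed reconstruction (the indirect ${!ip} step is inferred from the [[ -z 10.0.0.1 ]] / echo 10.0.0.1 output; it is not spelled out in the trace):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp in this job, so NVMF_INITIATOR_IP
        echo "${!ip}"                          # -> 10.0.0.1 in this run
    }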
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.417 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.986 nvme0n1 00:30:19.986 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.986 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.986 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.986 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.986 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI5ZGQyNjcwNGY2MTA3YTM5MzJhOWY0NmQ0MmI4NzNiNzc5YjE0OTdmMzg0YmUwJZt84A==: 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: ]] 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTk5ODU0Njg1Y2Y0ZWVhNTc5YTRmNGM1NTFlM2Q2NjOfBff+: 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:20.246 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.246 
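The sweep ends just below with a negative check: nvmet_auth_set_key sha256 ffdhe2048 1 rekeys the target, the host is reconfigured to match, but the final attach is issued without any --dhchap-key, so it is expected to fail. The NOT wrapper from autotest_common.sh (whose valid_exec_arg plumbing is visible in the trace) asserts exactly that by inverting the exit status. A simplified stand-in, not the real helper:

    # Succeed only if the wrapped command fails.
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0   # no DH-HMAC-CHAP key: must fail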
17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.818 nvme0n1 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3M2U0YmNkYThkNWZkZTNjOGRkNGVlNTRhYWQ4OTliODBjYzhmNDg2MGQ2NzVlOWU0NmFkMjJhZjI1NWQyYaDwxIs=: 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.818 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.758 nvme0n1 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd
00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:21.758 request:
00:30:21.758 {
00:30:21.758 "name": "nvme0",
00:30:21.758 "trtype": "tcp",
00:30:21.758 "traddr": "10.0.0.1",
00:30:21.758 "adrfam": "ipv4",
00:30:21.758 "trsvcid": "4420",
00:30:21.758 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:30:21.758 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:30:21.758 "prchk_reftag": false,
00:30:21.758 "prchk_guard": false,
00:30:21.758 "hdgst": false,
00:30:21.758 "ddgst": false,
00:30:21.758 "allow_unrecognized_csi": false,
00:30:21.758 "method": "bdev_nvme_attach_controller",
00:30:21.758 "req_id": 1
00:30:21.758 }
00:30:21.758 Got JSON-RPC error response
00:30:21.758 response:
00:30:21.758 {
00:30:21.758 "code": -5,
00:30:21.758 "message": "Input/output error"
00:30:21.758 }
00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:30:21.758 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
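
Note: the attach traced above is wrapped in NOT, so the test passes only because the connect is rejected. The target was just provisioned with a DH-HMAC-CHAP key, the host offers none, the failure surfaces as JSON-RPC error -5 (Input/output error), and es=1 is then inverted by the (( !es == 0 )) check. A minimal sketch of that wrapper, assuming this simplified form rather than the full helper in common/autotest_common.sh:

    # Hedged sketch: run a command that is required to fail.
    NOT() {
        local es=0
        "$@" || es=$?    # capture the exit status instead of aborting under set -e
        (( es != 0 ))    # succeed only if the wrapped command failed
    }

    # Usage mirroring the trace: no --dhchap-key against an authenticating target.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
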
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.759 request: 00:30:21.759 { 00:30:21.759 "name": "nvme0", 00:30:21.759 "trtype": "tcp", 00:30:21.759 "traddr": "10.0.0.1", 00:30:21.759 "adrfam": "ipv4", 00:30:21.759 "trsvcid": "4420", 00:30:21.759 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:21.759 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:21.759 "prchk_reftag": false, 00:30:21.759 "prchk_guard": false, 00:30:21.759 "hdgst": false, 00:30:21.759 "ddgst": false, 00:30:21.759 "dhchap_key": "key2", 00:30:21.759 "allow_unrecognized_csi": false, 00:30:21.759 "method": "bdev_nvme_attach_controller", 00:30:21.759 "req_id": 1 00:30:21.759 } 00:30:21.759 Got JSON-RPC error response 00:30:21.759 response: 00:30:21.759 { 00:30:21.759 "code": -5, 00:30:21.759 "message": "Input/output error" 00:30:21.759 } 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
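
Note: the case above repeats the attach with --dhchap-key key2 while the kernel target was keyed for key 1, so DH-HMAC-CHAP negotiation fails and the RPC again returns the generic -5 (Input/output error): the controller simply never comes up. rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py, so the equivalent standalone call is the sketch below (the named keys are assumed to have been registered with the host keyring earlier in the script):

    # Hedged sketch of the same attach outside the test harness.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2    # mismatched key: expect JSON-RPC error -5
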
00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.759 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.019 request: 00:30:22.019 { 00:30:22.019 "name": "nvme0", 00:30:22.019 "trtype": "tcp", 00:30:22.019 "traddr": "10.0.0.1", 00:30:22.019 "adrfam": "ipv4", 00:30:22.019 "trsvcid": "4420", 00:30:22.019 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:22.019 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:22.019 "prchk_reftag": false, 00:30:22.019 "prchk_guard": false, 00:30:22.019 "hdgst": false, 00:30:22.019 "ddgst": false, 00:30:22.019 "dhchap_key": "key1", 00:30:22.019 "dhchap_ctrlr_key": "ckey2", 00:30:22.019 "allow_unrecognized_csi": false, 00:30:22.019 "method": "bdev_nvme_attach_controller", 00:30:22.019 "req_id": 1 00:30:22.019 } 00:30:22.019 Got JSON-RPC error response 00:30:22.019 response: 00:30:22.019 { 00:30:22.019 "code": -5, 00:30:22.019 "message": "Input/output 
error" 00:30:22.019 } 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.019 17:46:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.019 nvme0n1 00:30:22.019 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.019 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:22.019 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:22.019 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:22.019 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.020 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.280 request: 00:30:22.280 { 00:30:22.280 "name": "nvme0", 00:30:22.280 "dhchap_key": "key1", 00:30:22.280 "dhchap_ctrlr_key": "ckey2", 00:30:22.280 "method": "bdev_nvme_set_keys", 00:30:22.280 "req_id": 1 00:30:22.280 } 00:30:22.280 Got JSON-RPC error response 00:30:22.280 response: 00:30:22.280 { 00:30:22.280 "code": -13, 00:30:22.280 "message": "Permission denied" 00:30:22.280 } 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:22.280 17:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:23.219 17:46:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.219 17:46:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:23.219 17:46:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.219 17:46:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.219 17:46:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.478 17:46:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:23.478 17:46:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:24.418 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM0NDdmMmMzNjEzNjEyZDIxNWM0YTg3MDMxNTZkYzZiZmU5OTc1MDVmZTdlZGMzkoO56w==: 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: ]] 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Yjc0ODMyNWUzNzJiYjQyOTE3NzJmYTQ1NzRkMTNhNTVjYWZmYTUyODQ3MTdmNmEw7TSyjw==: 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.419 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.679 nvme0n1 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTU0YTY5ZjQwMWQyMDJjMDU4YzA1ZGQ5MTJlYjZkMjGdnS+j: 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: ]] 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjJjYjMxYmY1OGRjZGY3YWQ4MmNhMWFkZDkzZTBiZTin+M1z: 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.679 request: 00:30:24.679 { 00:30:24.679 "name": "nvme0", 00:30:24.679 "dhchap_key": "key2", 00:30:24.679 "dhchap_ctrlr_key": "ckey1", 00:30:24.679 "method": "bdev_nvme_set_keys", 00:30:24.679 "req_id": 1 00:30:24.679 } 00:30:24.679 Got JSON-RPC error response 00:30:24.679 response: 00:30:24.679 { 00:30:24.679 "code": -13, 00:30:24.679 "message": "Permission denied" 00:30:24.679 } 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:24.679 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.680 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.680 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.680 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:30:24.680 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:30:25.619 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.619 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:25.619 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.619 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.619 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:30:25.879 17:46:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:25.879 rmmod nvme_tcp 00:30:25.879 rmmod nvme_fabrics 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1723029 ']' 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1723029 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1723029 ']' 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1723029 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:30:25.879 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1723029 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1723029' 00:30:25.880 killing process with pid 1723029 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1723029 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1723029 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:30:25.880 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.423 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.423 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:28.423 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:28.423 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:28.423 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:28.423 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:30:28.423 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:28.423 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:28.423 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:28.423 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:28.423 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:28.423 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:28.423 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:31.724 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:31.724 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:31.984 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.x87 /tmp/spdk.key-null.Vxc /tmp/spdk.key-sha256.3NF /tmp/spdk.key-sha384.6rv /tmp/spdk.key-sha512.UFe /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:32.244 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:35.611 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
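
Note: the cleanup above tears the kernel nvmet target down strictly bottom-up. Collected from the trace into one sketch (the redirect target of the bare 'echo 0', presumably the namespace enable attribute, is not visible in the xtrace):

    nqn=nqn.2024-02.io.spdk:cnode0
    cfg=/sys/kernel/config/nvmet
    rm    "$cfg/subsystems/$nqn/allowed_hosts/nqn.2024-02.io.spdk:host0"  # unlink host ACL
    rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"                          # drop DHCHAP host entry
    rm -f "$cfg/ports/1/subsystems/$nqn"       # detach the subsystem from the port first
    rmdir "$cfg/subsystems/$nqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"               # subsystem last, once it is empty
    modprobe -r nvmet_tcp nvmet                # finally unload the target modules
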
00:30:35.611 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:35.611 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:35.611 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:35.872 00:30:35.872 real 1m0.655s 00:30:35.872 user 0m54.611s 00:30:35.872 sys 0m15.874s 00:30:35.872 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.872 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.872 ************************************ 00:30:35.872 END TEST nvmf_auth_host 00:30:35.872 ************************************ 00:30:35.872 17:46:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:30:35.872 17:46:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:35.872 17:46:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:35.872 17:46:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.872 17:46:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.873 ************************************ 00:30:35.873 START TEST nvmf_digest 00:30:35.873 ************************************ 00:30:35.873 17:46:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:36.135 * Looking for test storage... 
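
Note: the real/user/sys block and the START/END banners above come from the autotest run_test wrapper, which times each test script and frames its output. A minimal sketch of that pattern, assuming this simplified form rather than the verbatim helper in common/autotest_common.sh:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # produces the real/user/sys summary seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }

    run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
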
00:30:36.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:36.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.135 --rc genhtml_branch_coverage=1 00:30:36.135 --rc genhtml_function_coverage=1 00:30:36.135 --rc genhtml_legend=1 00:30:36.135 --rc geninfo_all_blocks=1 00:30:36.135 --rc geninfo_unexecuted_blocks=1 00:30:36.135 00:30:36.135 ' 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:36.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.135 --rc genhtml_branch_coverage=1 00:30:36.135 --rc genhtml_function_coverage=1 00:30:36.135 --rc genhtml_legend=1 00:30:36.135 --rc geninfo_all_blocks=1 00:30:36.135 --rc geninfo_unexecuted_blocks=1 00:30:36.135 00:30:36.135 ' 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:36.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.135 --rc genhtml_branch_coverage=1 00:30:36.135 --rc genhtml_function_coverage=1 00:30:36.135 --rc genhtml_legend=1 00:30:36.135 --rc geninfo_all_blocks=1 00:30:36.135 --rc geninfo_unexecuted_blocks=1 00:30:36.135 00:30:36.135 ' 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:36.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.135 --rc genhtml_branch_coverage=1 00:30:36.135 --rc genhtml_function_coverage=1 00:30:36.135 --rc genhtml_legend=1 00:30:36.135 --rc geninfo_all_blocks=1 00:30:36.135 --rc geninfo_unexecuted_blocks=1 00:30:36.135 00:30:36.135 ' 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.135 
17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.135 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:36.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.136 17:46:28 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.136 17:46:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.282 
17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:44.282 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:44.282 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:44.282 Found net devices under 0000:4b:00.0: cvl_0_0 
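The trace above resolves each matched PCI function to its kernel network interfaces by globbing the device's sysfs node, then strips the paths down to bare interface names. A minimal standalone sketch of that lookup, assuming a hypothetical device address; `nullglob` stands in for the harness's own empty-glob handling:

```bash
#!/usr/bin/env bash
# Sketch of the sysfs lookup traced above: for a PCI function, the kernel
# exposes its network interfaces under /sys/bus/pci/devices/<dbdf>/net/.
pci="0000:4b:00.0"   # illustrative device address

# Glob expands to one entry per net interface bound to this function;
# nullglob avoids a literal '*' when the driver exposes none.
shopt -s nullglob
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)

# Keep only the interface names, mirroring "${pci_net_devs[@]##*/}".
pci_net_devs=("${pci_net_devs[@]##*/}")

for dev in "${pci_net_devs[@]}"; do
    state=$(cat "/sys/class/net/$dev/operstate")
    echo "Found net device under $pci: $dev ($state)"
done
```

Going through /sys/bus/pci/devices rather than `ip link` ties each interface back to the exact PCI function that the vendor/device-ID scan selected.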
00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:44.282 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:44.282 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:44.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:30:44.283 00:30:44.283 --- 10.0.0.2 ping statistics --- 00:30:44.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.283 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:44.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:30:44.283 00:30:44.283 --- 10.0.0.1 ping statistics --- 00:30:44.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.283 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:44.283 ************************************ 00:30:44.283 START TEST nvmf_digest_clean 00:30:44.283 ************************************ 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1729300 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1729300 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1729300 ']' 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.283 17:46:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:44.283 [2024-12-06 17:46:35.676078] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:30:44.283 [2024-12-06 17:46:35.676138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.283 [2024-12-06 17:46:35.778744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.283 [2024-12-06 17:46:35.828850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.283 [2024-12-06 17:46:35.828903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.283 [2024-12-06 17:46:35.828911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.283 [2024-12-06 17:46:35.828918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.283 [2024-12-06 17:46:35.828925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:44.283 [2024-12-06 17:46:35.829665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.545 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:44.805 null0 00:30:44.805 [2024-12-06 17:46:36.633813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.805 [2024-12-06 17:46:36.658111] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1729333 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1729333 /var/tmp/bperf.sock 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1729333 ']' 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:44.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:44.805 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.806 17:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:44.806 [2024-12-06 17:46:36.717713] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:30:44.806 [2024-12-06 17:46:36.717778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729333 ] 00:30:44.806 [2024-12-06 17:46:36.809507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.806 [2024-12-06 17:46:36.864141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.746 17:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.746 17:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:45.746 17:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:45.746 17:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:45.746 17:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:46.005 17:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:46.005 17:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:46.005 nvme0n1 00:30:46.266 17:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:46.266 17:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:46.266 Running I/O for 2 seconds... 
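Each pass follows the same bring-up: bdevperf is launched paused (`-z --wait-for-rpc`) on a private RPC socket, `framework_start_init` completes subsystem init, the controller is attached with data digest enabled (`--ddgst`), and bdevperf.py triggers the timed run. A condensed sketch of that sequence, lifted from the commands traced above; it assumes the nvmf_tgt from earlier in the log is already listening on 10.0.0.2:4420:

```bash
#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf paused so bdevs can be configured over RPC first.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# Wait for the RPC socket (the harness polls it via waitforlisten;
# a crude loop stands in here).
while [ ! -S "$SOCK" ]; do sleep 0.1; done

# Finish subsystem init, then attach the target with data digest on.
"$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Kick off the timed run against the freshly created nvme0n1 bdev.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
```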
00:30:48.159 18852.00 IOPS, 73.64 MiB/s [2024-12-06T16:46:40.225Z] 19410.00 IOPS, 75.82 MiB/s 00:30:48.160 Latency(us) 00:30:48.160 [2024-12-06T16:46:40.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.160 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:48.160 nvme0n1 : 2.00 19432.32 75.91 0.00 0.00 6579.10 3140.27 20425.39 00:30:48.160 [2024-12-06T16:46:40.226Z] =================================================================================================================== 00:30:48.160 [2024-12-06T16:46:40.226Z] Total : 19432.32 75.91 0.00 0.00 6579.10 3140.27 20425.39 00:30:48.160 { 00:30:48.160 "results": [ 00:30:48.160 { 00:30:48.160 "job": "nvme0n1", 00:30:48.160 "core_mask": "0x2", 00:30:48.160 "workload": "randread", 00:30:48.160 "status": "finished", 00:30:48.160 "queue_depth": 128, 00:30:48.160 "io_size": 4096, 00:30:48.160 "runtime": 2.00429, 00:30:48.160 "iops": 19432.317678579446, 00:30:48.160 "mibps": 75.90749093195096, 00:30:48.160 "io_failed": 0, 00:30:48.160 "io_timeout": 0, 00:30:48.160 "avg_latency_us": 6579.103773236109, 00:30:48.160 "min_latency_us": 3140.266666666667, 00:30:48.160 "max_latency_us": 20425.386666666665 00:30:48.160 } 00:30:48.160 ], 00:30:48.160 "core_count": 1 00:30:48.160 } 00:30:48.160 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:48.160 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:48.160 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:48.160 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:48.160 | select(.opcode=="crc32c") 00:30:48.160 | "\(.module_name) \(.executed)"' 00:30:48.160 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1729333 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1729333 ']' 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1729333 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729333 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729333' 00:30:48.421 killing process with pid 1729333 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1729333 00:30:48.421 Received shutdown signal, test time was about 2.000000 seconds 00:30:48.421 00:30:48.421 Latency(us) 00:30:48.421 [2024-12-06T16:46:40.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.421 [2024-12-06T16:46:40.487Z] =================================================================================================================== 00:30:48.421 [2024-12-06T16:46:40.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:48.421 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1729333 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1729386 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1729386 /var/tmp/bperf.sock 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1729386 ']' 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:48.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.682 17:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:48.682 [2024-12-06 17:46:40.627233] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
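The MiB/s column in these summaries is just IOPS scaled by the I/O size (MiB/s = IOPS × io_size / 2^20), so the tables can be sanity-checked by hand. For the 4096-byte randread result above:

```bash
# Cross-check the summary line above: MiB/s = IOPS * io_size / 2^20.
awk 'BEGIN {
    iops = 19432.32; io_size = 4096             # from the randread 4k result
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)   # -> 75.91 MiB/s
}'
```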
00:30:48.682 [2024-12-06 17:46:40.627287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729386 ] 00:30:48.682 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:48.682 Zero copy mechanism will not be used. 00:30:48.682 [2024-12-06 17:46:40.712293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.682 [2024-12-06 17:46:40.740608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.623 17:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.623 17:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:49.623 17:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:49.623 17:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:49.623 17:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:49.623 17:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:49.623 17:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:49.883 nvme0n1 00:30:49.884 17:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:49.884 17:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:50.144 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:50.144 Zero copy mechanism will not be used. 00:30:50.144 Running I/O for 2 seconds... 
00:30:52.028 4031.00 IOPS, 503.88 MiB/s [2024-12-06T16:46:44.094Z] 3608.00 IOPS, 451.00 MiB/s 00:30:52.028 Latency(us) 00:30:52.028 [2024-12-06T16:46:44.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.028 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:52.028 nvme0n1 : 2.04 3538.66 442.33 0.00 0.00 4432.51 771.41 45656.75 00:30:52.028 [2024-12-06T16:46:44.094Z] =================================================================================================================== 00:30:52.028 [2024-12-06T16:46:44.094Z] Total : 3538.66 442.33 0.00 0.00 4432.51 771.41 45656.75 00:30:52.028 { 00:30:52.028 "results": [ 00:30:52.028 { 00:30:52.028 "job": "nvme0n1", 00:30:52.028 "core_mask": "0x2", 00:30:52.028 "workload": "randread", 00:30:52.028 "status": "finished", 00:30:52.028 "queue_depth": 16, 00:30:52.028 "io_size": 131072, 00:30:52.028 "runtime": 2.043713, 00:30:52.028 "iops": 3538.657335937091, 00:30:52.028 "mibps": 442.33216699213637, 00:30:52.028 "io_failed": 0, 00:30:52.028 "io_timeout": 0, 00:30:52.028 "avg_latency_us": 4432.513982300885, 00:30:52.028 "min_latency_us": 771.4133333333333, 00:30:52.028 "max_latency_us": 45656.746666666666 00:30:52.028 } 00:30:52.028 ], 00:30:52.028 "core_count": 1 00:30:52.028 } 00:30:52.029 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:52.029 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:52.029 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:52.029 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:52.029 | select(.opcode=="crc32c") 00:30:52.029 | "\(.module_name) \(.executed)"' 00:30:52.029 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1729386 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1729386 ']' 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1729386 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729386 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729386' 00:30:52.290 killing process with pid 1729386 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1729386 00:30:52.290 Received shutdown signal, test time was about 2.000000 seconds 00:30:52.290 00:30:52.290 Latency(us) 00:30:52.290 [2024-12-06T16:46:44.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.290 [2024-12-06T16:46:44.356Z] =================================================================================================================== 00:30:52.290 [2024-12-06T16:46:44.356Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.290 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1729386 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1729451 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1729451 /var/tmp/bperf.sock 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1729451 ']' 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:52.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.550 17:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:52.550 [2024-12-06 17:46:44.500829] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
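The pass/fail decision reads the accel framework's statistics and checks which module executed crc32c, using the jq filter traced above. A sketch with a hand-written stand-in for `accel_get_stats` output (the real RPC returns more fields; only the ones the filter touches are shown):

```bash
# The jq filter is the one used above; the JSON document is illustrative.
echo '{
  "operations": [
    { "opcode": "copy",   "module_name": "software", "executed": 0 },
    { "opcode": "crc32c", "module_name": "software", "executed": 1518 }
  ]
}' | jq -rc '.operations[]
  | select(.opcode=="crc32c")
  | "\(.module_name) \(.executed)"'
# Prints "software 1518": read into acc_module/acc_executed and compared
# against the expected module ("software" when DSA is disabled).
```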
00:30:52.550 [2024-12-06 17:46:44.500889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729451 ] 00:30:52.550 [2024-12-06 17:46:44.581003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.550 [2024-12-06 17:46:44.609969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.490 17:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.490 17:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:53.490 17:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:53.490 17:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:53.490 17:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:53.490 17:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:53.490 17:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:54.059 nvme0n1 00:30:54.059 17:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:54.059 17:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:54.059 Running I/O for 2 seconds... 
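The JSON result blocks printed after each run carry the same numbers as the human-readable tables and are easier to post-process. A hedged jq sketch pulling the headline figures from a captured block (the file name is illustrative; the field names match the blocks above):

```bash
# Pull the headline numbers back out of a captured results block.
jq -r '.results[]
  | "\(.job): \(.iops | floor) IOPS, avg latency \(.avg_latency_us | floor) us over \(.runtime)s"' \
  bperf_results.json
# e.g. "nvme0n1: 3538 IOPS, avg latency 4432 us over 2.043713s"
```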
00:30:55.937 30308.00 IOPS, 118.39 MiB/s [2024-12-06T16:46:48.003Z] 29805.00 IOPS, 116.43 MiB/s 00:30:55.937 Latency(us) 00:30:55.937 [2024-12-06T16:46:48.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.937 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:55.937 nvme0n1 : 2.00 29803.23 116.42 0.00 0.00 4288.21 2143.57 9011.20 00:30:55.937 [2024-12-06T16:46:48.003Z] =================================================================================================================== 00:30:55.937 [2024-12-06T16:46:48.003Z] Total : 29803.23 116.42 0.00 0.00 4288.21 2143.57 9011.20 00:30:55.937 { 00:30:55.937 "results": [ 00:30:55.937 { 00:30:55.937 "job": "nvme0n1", 00:30:55.937 "core_mask": "0x2", 00:30:55.937 "workload": "randwrite", 00:30:55.937 "status": "finished", 00:30:55.937 "queue_depth": 128, 00:30:55.937 "io_size": 4096, 00:30:55.937 "runtime": 2.004145, 00:30:55.937 "iops": 29803.232800021953, 00:30:55.937 "mibps": 116.41887812508575, 00:30:55.937 "io_failed": 0, 00:30:55.937 "io_timeout": 0, 00:30:55.937 "avg_latency_us": 4288.2088973715045, 00:30:55.937 "min_latency_us": 2143.5733333333333, 00:30:55.937 "max_latency_us": 9011.2 00:30:55.937 } 00:30:55.937 ], 00:30:55.937 "core_count": 1 00:30:55.937 } 00:30:55.937 17:46:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:55.937 17:46:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:55.937 17:46:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:55.937 17:46:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:55.937 | select(.opcode=="crc32c") 00:30:55.937 | "\(.module_name) \(.executed)"' 00:30:55.937 17:46:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1729451 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1729451 ']' 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1729451 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729451 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729451' 00:30:56.197 killing process with pid 1729451 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1729451 00:30:56.197 Received shutdown signal, test time was about 2.000000 seconds 00:30:56.197 00:30:56.197 Latency(us) 00:30:56.197 [2024-12-06T16:46:48.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.197 [2024-12-06T16:46:48.263Z] =================================================================================================================== 00:30:56.197 [2024-12-06T16:46:48.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:56.197 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1729451 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1729508 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1729508 /var/tmp/bperf.sock 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1729508 ']' 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:56.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:56.457 17:46:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:56.457 [2024-12-06 17:46:48.383944] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
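Teardown goes through `killprocess`, whose guarded steps are visible in the trace: confirm the PID is still alive with `kill -0`, read the command name back with `ps` so a recycled PID is never signalled, and special-case a `sudo` wrapper. A simplified re-creation of that pattern (paraphrased from the trace, not copied from autotest_common.sh):

```bash
# Guarded kill, paraphrasing the killprocess steps traced above.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1             # don't signal a sudo wrapper
    kill "$pid"
    wait "$pid" 2>/dev/null                       # only works for our own child
    echo "killed process with pid $pid ($name)"
}
```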
00:30:56.457 [2024-12-06 17:46:48.383997] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729508 ] 00:30:56.457 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:56.457 Zero copy mechanism will not be used. 00:30:56.457 [2024-12-06 17:46:48.468995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.457 [2024-12-06 17:46:48.497335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.396 17:46:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.396 17:46:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:57.396 17:46:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:57.396 17:46:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:57.396 17:46:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:57.396 17:46:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:57.396 17:46:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:57.657 nvme0n1 00:30:57.657 17:46:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:57.657 17:46:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:57.916 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:57.916 Zero copy mechanism will not be used. 00:30:57.916 Running I/O for 2 seconds... 
00:30:59.849 4946.00 IOPS, 618.25 MiB/s [2024-12-06T16:46:51.915Z] 5244.50 IOPS, 655.56 MiB/s 00:30:59.849 Latency(us) 00:30:59.849 [2024-12-06T16:46:51.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.849 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:59.849 nvme0n1 : 2.00 5244.91 655.61 0.00 0.00 3046.86 1167.36 6662.83 00:30:59.849 [2024-12-06T16:46:51.915Z] =================================================================================================================== 00:30:59.849 [2024-12-06T16:46:51.915Z] Total : 5244.91 655.61 0.00 0.00 3046.86 1167.36 6662.83 00:30:59.849 { 00:30:59.849 "results": [ 00:30:59.849 { 00:30:59.849 "job": "nvme0n1", 00:30:59.849 "core_mask": "0x2", 00:30:59.849 "workload": "randwrite", 00:30:59.849 "status": "finished", 00:30:59.849 "queue_depth": 16, 00:30:59.849 "io_size": 131072, 00:30:59.849 "runtime": 2.003465, 00:30:59.849 "iops": 5244.91318790196, 00:30:59.849 "mibps": 655.614148487745, 00:30:59.849 "io_failed": 0, 00:30:59.849 "io_timeout": 0, 00:30:59.849 "avg_latency_us": 3046.856388783149, 00:30:59.849 "min_latency_us": 1167.36, 00:30:59.849 "max_latency_us": 6662.826666666667 00:30:59.849 } 00:30:59.849 ], 00:30:59.849 "core_count": 1 00:30:59.849 } 00:30:59.849 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:59.849 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:59.849 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:59.849 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:59.849 | select(.opcode=="crc32c") 00:30:59.849 | "\(.module_name) \(.executed)"' 00:30:59.849 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:00.108 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:00.108 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:00.108 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:00.108 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:00.108 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1729508 00:31:00.108 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1729508 ']' 00:31:00.108 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1729508 00:31:00.108 17:46:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729508 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729508' 00:31:00.108 killing process with pid 1729508 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1729508 00:31:00.108 Received shutdown signal, test time was about 2.000000 seconds 00:31:00.108 00:31:00.108 Latency(us) 00:31:00.108 [2024-12-06T16:46:52.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.108 [2024-12-06T16:46:52.174Z] =================================================================================================================== 00:31:00.108 [2024-12-06T16:46:52.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1729508 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1729300 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1729300 ']' 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1729300 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:00.108 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729300 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729300' 00:31:00.368 killing process with pid 1729300 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1729300 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1729300 00:31:00.368 00:31:00.368 real 0m16.734s 00:31:00.368 user 0m33.223s 00:31:00.368 sys 0m3.705s 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:00.368 ************************************ 00:31:00.368 END TEST nvmf_digest_clean 00:31:00.368 ************************************ 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:00.368 ************************************ 00:31:00.368 START TEST nvmf_digest_error 00:31:00.368 ************************************ 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1729596 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1729596 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1729596 ']' 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.368 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.369 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.369 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.369 17:46:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:00.627 [2024-12-06 17:46:52.477422] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:31:00.627 [2024-12-06 17:46:52.477482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.627 [2024-12-06 17:46:52.567827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.627 [2024-12-06 17:46:52.599452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.627 [2024-12-06 17:46:52.599482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.628 [2024-12-06 17:46:52.599488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.628 [2024-12-06 17:46:52.599493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.628 [2024-12-06 17:46:52.599497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:00.628 [2024-12-06 17:46:52.599974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:01.568 [2024-12-06 17:46:53.309910] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:01.568 null0
00:31:01.568 [2024-12-06 17:46:53.388661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:01.568 [2024-12-06 17:46:53.412861] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1729629
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1729629 /var/tmp/bperf.sock
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1729629 ']'
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
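The target acknowledges the opcode re-route ("Operation crc32c will be assigned to module error"), then common_target_config feeds a JSON config through rpc_cmd that yields the null0 bdev, the TCP transport, and the listener on 10.0.0.2:4420, and run_bperf_err starts bdevperf with -z so it too waits on its own RPC socket. A sketch of roughly equivalent one-at-a-time RPCs; the opcode routing, bdev name, and listener address come from the trace, while the null-bdev size and subsystem flags are illustrative assumptions standing in for the JSON payload the script actually pipes in:

  ./scripts/rpc.py accel_assign_opc -o crc32c -m error   # route crc32c through the error module
  ./scripts/rpc.py framework_start_init                  # finish the init deferred by --wait-for-rpc
  ./scripts/rpc.py bdev_null_create null0 100 4096       # illustrative size: 100 MB, 4096-byte blocks
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420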
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:01.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:01.568 17:46:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:01.568 [2024-12-06 17:46:53.469967] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:31:01.568 [2024-12-06 17:46:53.470017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729629 ]
00:31:01.568 [2024-12-06 17:46:53.553405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:01.568 [2024-12-06 17:46:53.582920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:02.508 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:02.508 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:02.508 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:02.508 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:02.508 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:02.508 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:02.508 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:02.508 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:02.508 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:02.508 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:02.767 nvme0n1
00:31:03.028 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:31:03.028 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.028 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
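The ordering of the four RPCs above is the point of the setup: injection is disabled while the host connects, so the connect-time traffic is not corrupted; the controller is attached with --ddgst, enabling data digests on the NVMe/TCP connection; only then is injection re-armed with -t corrupt -i 256, which (per the RPC's interval argument) corrupts on the order of one crc32c operation in every 256. Note that rpc_cmd talks to the target's default socket while bperf_rpc talks to /var/tmp/bperf.sock, so it is the target-side digest generation that gets corrupted and the host (bdevperf) that detects the mismatch: each hit appears below as a "data digest error" from nvme_tcp.c paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which --bdev-retry-count -1 causes the host to retry indefinitely. Condensed from the traced commands, with the Jenkins workspace paths shortened to repo-relative ones:

  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # target socket: keep the connect clean
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256 # target socket: re-arm corruption
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests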
00:31:03.028 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.028 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:03.028 17:46:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:03.028 Running I/O for 2 seconds... 00:31:03.028 [2024-12-06 17:46:54.960360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:54.960394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:54.960411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:54.968936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:54.968956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:54.968964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:54.980467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:54.980486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:54.980493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:54.992784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:54.992803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:54.992810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:55.003271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:55.003289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:55.003296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:55.011243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:55.011260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:55.011267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:55.019854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:55.019872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:55.019879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:55.029538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:55.029556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:55.029562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:55.038315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:55.038332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:55.038339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:55.047008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:55.047029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:55.047036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:55.056505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:55.056522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:55.056528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:55.064820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:55.064838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:55.064845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:55.073817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.028 [2024-12-06 17:46:55.073835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.028 [2024-12-06 17:46:55.073841] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.028 [2024-12-06 17:46:55.083141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.029 [2024-12-06 17:46:55.083159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.029 [2024-12-06 17:46:55.083165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.029 [2024-12-06 17:46:55.092567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.029 [2024-12-06 17:46:55.092584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.029 [2024-12-06 17:46:55.092591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.288 [2024-12-06 17:46:55.101261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.288 [2024-12-06 17:46:55.101278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.288 [2024-12-06 17:46:55.101285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.288 [2024-12-06 17:46:55.110818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.288 [2024-12-06 17:46:55.110835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.288 [2024-12-06 17:46:55.110841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.288 [2024-12-06 17:46:55.119649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.288 [2024-12-06 17:46:55.119666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.288 [2024-12-06 17:46:55.119673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.288 [2024-12-06 17:46:55.129157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.288 [2024-12-06 17:46:55.129173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.288 [2024-12-06 17:46:55.129180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.288 [2024-12-06 17:46:55.137404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.288 [2024-12-06 17:46:55.137421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.288 [2024-12-06 17:46:55.137427] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.146278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.146295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.146301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.155562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.155579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.155586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.163428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.163445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.163452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.173108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.173126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.173132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.182484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.182501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.182508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.192116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.192133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.192140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.201371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.201389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12196 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.201398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.209177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.209195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.209201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.219190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.219207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.219213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.227445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.227462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.227468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.236068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.236085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.236092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.245069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.245086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.245093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.255005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.255022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.255029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.264389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.264405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 
nsid:1 lba:14224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.264412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.274170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.274188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.274195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.283410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.283428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.283434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.292354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.292372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.292378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.300404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.300421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.300428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.309472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.309489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.309496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.318156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.318173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.318180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.326852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.326869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.326876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.335383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.335400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.335407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.289 [2024-12-06 17:46:55.344938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.289 [2024-12-06 17:46:55.344955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.289 [2024-12-06 17:46:55.344961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.549 [2024-12-06 17:46:55.355184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.549 [2024-12-06 17:46:55.355201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.549 [2024-12-06 17:46:55.355211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.549 [2024-12-06 17:46:55.364507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.549 [2024-12-06 17:46:55.364524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.549 [2024-12-06 17:46:55.364531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.549 [2024-12-06 17:46:55.372042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.549 [2024-12-06 17:46:55.372059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.549 [2024-12-06 17:46:55.372066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.549 [2024-12-06 17:46:55.382380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.549 [2024-12-06 17:46:55.382398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.549 [2024-12-06 17:46:55.382404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.549 [2024-12-06 17:46:55.391929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 
00:31:03.549 [2024-12-06 17:46:55.391946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.549 [2024-12-06 17:46:55.391952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.549 [2024-12-06 17:46:55.400318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.400336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.400342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.408664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.408681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.408687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.417746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.417763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.417769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.427245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.427263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.427269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.436839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.436859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.436865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.445593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.445610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.445616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.455227] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.455244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.455251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.466498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.466515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.466521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.474664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.474689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.474695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.483633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.483654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.483661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.492832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.492848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.492855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.501342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.501359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.501366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.509907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.509924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.509931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.519514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.519531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.519538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.528283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.528300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.528307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.537542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.537559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.537566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.545862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.545879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.545886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.555119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.555136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.555142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.563257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.563274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.563281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.571942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.571959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.571966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.581276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.581293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.581299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.589655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.589673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.589683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.599577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.599593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.599600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.550 [2024-12-06 17:46:55.607774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.550 [2024-12-06 17:46:55.607791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.550 [2024-12-06 17:46:55.607798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.616616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.616633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.616643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.625182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.625198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.625205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.635445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.635463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.635469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.645408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.645425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.645432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.654981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.654998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.655004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.666093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.666110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.666116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.675580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.675597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.675603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.684128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.684145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.684152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.693316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.693333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.693340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.704344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.704361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:03.811 [2024-12-06 17:46:55.704367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.713464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.713481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.713487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.721098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.721116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.721122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.731009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.731027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.731034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.741083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.741100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.741106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.749599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.749615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.749625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.758005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.758022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.811 [2024-12-06 17:46:55.758028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.811 [2024-12-06 17:46:55.766542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:03.811 [2024-12-06 17:46:55.766559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:15189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:03.811 [2024-12-06 17:46:55.766565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:03.811 [2024-12-06 17:46:55.775286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60)
00:31:03.811 [2024-12-06 17:46:55.775303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:03.811 [2024-12-06 17:46:55.775309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (*ERROR* data digest error on tqpair=(0x2008d60), the failing READ, its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens of further READs between 17:46:55.784 and 17:46:55.929; only the cid, lba, and timestamps vary ...]
00:31:04.072 27507.00 IOPS, 107.45 MiB/s [2024-12-06T16:46:56.138Z]
[... the pattern then continues uninterrupted from 17:46:55.938 through 17:46:56.933, still qid:1 on tqpair 0x2008d60, with cid and lba varying per command ...]
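Taken as a whole, the flood of notices above is the expected signature of this subtest, not a malfunction: CRC32C corruption has been injected on the digest path, so read PDUs fail data digest verification on the host, and with --bdev-retry-count -1 each affected READ is retried and logged as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion while throughput holds near 27.5k IOPS. A quick offline sanity check over a captured console log is a pair of greps; a minimal sketch, assuming the output was saved to a file (the filename is hypothetical):

# Tally the injected failures in a saved copy of this console log.
# In this run the two counts track each other: every digest error
# surfaces as exactly one transient-transport-error completion.
grep -c 'data digest error on tqpair' nvmf_digest_error.log
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log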
00:31:04.857 [2024-12-06 17:46:56.914961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.857 [2024-12-06 17:46:56.914967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.117 [2024-12-06 17:46:56.924151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:05.117 [2024-12-06 17:46:56.924168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.117 [2024-12-06 17:46:56.924175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.117 [2024-12-06 17:46:56.933379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:05.117 [2024-12-06 17:46:56.933395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.118 [2024-12-06 17:46:56.933402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.118 27503.00 IOPS, 107.43 MiB/s [2024-12-06T16:46:57.184Z] [2024-12-06 17:46:56.941541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2008d60) 00:31:05.118 [2024-12-06 17:46:56.941558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.118 [2024-12-06 17:46:56.941567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.118 00:31:05.118 Latency(us) 00:31:05.118 [2024-12-06T16:46:57.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.118 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:05.118 nvme0n1 : 2.00 27518.14 107.49 0.00 0.00 4646.12 2102.61 20643.84 00:31:05.118 [2024-12-06T16:46:57.184Z] =================================================================================================================== 00:31:05.118 [2024-12-06T16:46:57.184Z] Total : 27518.14 107.49 0.00 0.00 4646.12 2102.61 20643.84 00:31:05.118 { 00:31:05.118 "results": [ 00:31:05.118 { 00:31:05.118 "job": "nvme0n1", 00:31:05.118 "core_mask": "0x2", 00:31:05.118 "workload": "randread", 00:31:05.118 "status": "finished", 00:31:05.118 "queue_depth": 128, 00:31:05.118 "io_size": 4096, 00:31:05.118 "runtime": 2.003551, 00:31:05.118 "iops": 27518.14153969627, 00:31:05.118 "mibps": 107.49274038943855, 00:31:05.118 "io_failed": 0, 00:31:05.118 "io_timeout": 0, 00:31:05.118 "avg_latency_us": 4646.117970036637, 00:31:05.118 "min_latency_us": 2102.6133333333332, 00:31:05.118 "max_latency_us": 20643.84 00:31:05.118 } 00:31:05.118 ], 00:31:05.118 "core_count": 1 00:31:05.118 } 00:31:05.118 17:46:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:05.118 17:46:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:05.118 17:46:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r 
00:31:05.118 17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 ))
00:31:05.118 17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1729629
00:31:05.118 17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1729629 ']'
00:31:05.118 17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1729629
00:31:05.118 17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:05.118 17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:05.118 17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729629
00:31:05.378 17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729629'
killing process with pid 1729629
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1729629
Received shutdown signal, test time was about 2.000000 seconds
00:31:05.378
00:31:05.378 Latency(us)
00:31:05.378 [2024-12-06T16:46:57.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:05.378 [2024-12-06T16:46:57.444Z] ===================================================================================================================
00:31:05.378 [2024-12-06T16:46:57.444Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:05.378 17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1729629
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1729693
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1729693 /var/tmp/bperf.sock
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1729693 ']'
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
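The bdevperf invocation just traced starts the next pass (randread, 128 KiB I/O, queue depth 16): -m 2 pins it to core 1, -r gives it a private RPC socket so it cannot collide with the target's default socket, and -z makes it start idle and wait for configuration over RPC instead of running immediately. waitforlisten, whose trace follows, polls until that socket answers. A minimal re-creation of the pattern, assuming the working directory is an SPDK checkout and using rpc_get_methods as the readiness probe in place of the full waitforlisten helper:

    # launch bdevperf idle, then wait for its RPC socket to come up
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    until scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done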
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
17:46:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:05.378 [2024-12-06 17:46:57.359097] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:31:05.378 [2024-12-06 17:46:57.359153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729693 ]
00:31:05.378 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:05.378 Zero copy mechanism will not be used.
00:31:05.379 [2024-12-06 17:46:57.442370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:05.639 [2024-12-06 17:46:57.471701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:06.209 17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:06.469 17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:06.469 17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:06.728 nvme0n1
00:31:06.728 17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
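With the new bdevperf process listening, the trace above rebuilds the whole error path over RPC: per-status-code NVMe error counters with unlimited bdev retries, the crc32c injector reset to a known state, a controller attached with --ddgst so a data digest is computed and verified for every read, and finally crc32c corruption re-armed (-i 32 as traced). Consolidated into a plain script, with paths relative to an SPDK checkout and the target address as in this run:

    RPC='scripts/rpc.py -s /var/tmp/bperf.sock'
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable      # start from a clean injector
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

perform_tests, issued next in the trace, then drives the 2-second run whose digest errors follow.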
00:31:06.728 17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.728 17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:06.728 17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
17:46:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:06.989 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:06.989 Zero copy mechanism will not be used.
00:31:06.989 Running I/O for 2 seconds...
00:31:06.989 [2024-12-06 17:46:58.817001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0)
00:31:06.989 [2024-12-06 17:46:58.817035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:06.989 [2024-12-06 17:46:58.817044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:06.989 [2024-12-06 17:46:58.827127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0)
00:31:06.989 [2024-12-06 17:46:58.827151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:06.989 [2024-12-06 17:46:58.827158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:06.989 [2024-12-06 17:46:58.836474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0)
00:31:06.989 [2024-12-06 17:46:58.836495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:06.989 [2024-12-06 17:46:58.836502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:06.989 [2024-12-06 17:46:58.845112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0)
00:31:06.989 [2024-12-06 17:46:58.845132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:06.989 [2024-12-06 17:46:58.845138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:06.989 [2024-12-06 17:46:58.855262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0)
00:31:06.989 [2024-12-06 17:46:58.855281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:06.989 [2024-12-06 17:46:58.855288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:06.989 [2024-12-06 17:46:58.865821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0)
00:31:06.989 [2024-12-06 17:46:58.865840]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.865847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.872517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.872535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.872542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.880625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.880656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.880663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.891789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.891807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.891814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.899112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.899130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.899137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.903414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.903432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.903439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.907767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.907786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.907792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.917480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 
00:31:06.989 [2024-12-06 17:46:58.917499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.917505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.926723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.926742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.926749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.938037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.938055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.938062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.948538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.948557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.948563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.960019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.960037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.960044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.969395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.969415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.989 [2024-12-06 17:46:58.969421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:06.989 [2024-12-06 17:46:58.978501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.989 [2024-12-06 17:46:58.978521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:58.978527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:06.990 [2024-12-06 17:46:58.987914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.990 [2024-12-06 17:46:58.987933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:58.987939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:06.990 [2024-12-06 17:46:58.992674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.990 [2024-12-06 17:46:58.992693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:58.992699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:06.990 [2024-12-06 17:46:59.001903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.990 [2024-12-06 17:46:59.001921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:59.001928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:06.990 [2024-12-06 17:46:59.013055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.990 [2024-12-06 17:46:59.013074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:59.013081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:06.990 [2024-12-06 17:46:59.023803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.990 [2024-12-06 17:46:59.023822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:59.023828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:06.990 [2024-12-06 17:46:59.029823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.990 [2024-12-06 17:46:59.029842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:59.029852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:06.990 [2024-12-06 17:46:59.034305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.990 [2024-12-06 17:46:59.034323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:59.034329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:06.990 [2024-12-06 17:46:59.039577] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.990 [2024-12-06 17:46:59.039595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:59.039601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:06.990 [2024-12-06 17:46:59.043901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.990 [2024-12-06 17:46:59.043919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:59.043926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:06.990 [2024-12-06 17:46:59.050932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:06.990 [2024-12-06 17:46:59.050951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.990 [2024-12-06 17:46:59.050957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.060368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.060386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.060392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.065546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.065565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.065572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.072992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.073010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.073017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.082585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.082603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.082610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:31:07.249 [2024-12-06 17:46:59.092326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.092348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.092354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.102553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.102571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.102578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.114124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.114142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.114149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.125752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.125771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.125777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.135119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.135138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.135144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.147220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.147238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.147245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.159081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.159100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.159107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.170409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.170428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.170435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.181995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.182014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.182020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.191528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.191546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.191553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.200491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.200509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.200516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.209443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.209462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.209469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.220681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.220699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.220705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.228381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.228400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.228406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.237450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.237468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.237475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.248416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.248434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.248441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.258959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.258977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.258984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.269494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.269517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.269524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.249 [2024-12-06 17:46:59.281351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.249 [2024-12-06 17:46:59.281370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.249 [2024-12-06 17:46:59.281376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.250 [2024-12-06 17:46:59.291879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.250 [2024-12-06 17:46:59.291897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.250 [2024-12-06 17:46:59.291904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.250 [2024-12-06 17:46:59.302054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.250 [2024-12-06 17:46:59.302074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.250 [2024-12-06 17:46:59.302080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.250 [2024-12-06 17:46:59.310811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.250 [2024-12-06 17:46:59.310829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.250 [2024-12-06 17:46:59.310836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.316362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.316382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.316389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.323908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.323928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.323934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.333362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.333381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.333387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.339631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.339653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.339660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.349970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.349989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.349995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.358901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.358919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 
[2024-12-06 17:46:59.358926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.369206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.369224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.369230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.380986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.381005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.381012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.392802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.392820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.392827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.404845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.404863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.404870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.416584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.416602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.416608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.428483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.509 [2024-12-06 17:46:59.428501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.509 [2024-12-06 17:46:59.428507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.509 [2024-12-06 17:46:59.441323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.441341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8448 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.441352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.453497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.453515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.453522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.464435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.464452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.464459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.476391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.476410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.476416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.488550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.488569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.488576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.501135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.501155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.501161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.512861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.512879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.512886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.525045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.525063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.525070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.537089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.537107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.537114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.546477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.546499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.546506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.556385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.556403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.556409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.510 [2024-12-06 17:46:59.564627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.510 [2024-12-06 17:46:59.564656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.510 [2024-12-06 17:46:59.564663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.770 [2024-12-06 17:46:59.575964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.770 [2024-12-06 17:46:59.575984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.770 [2024-12-06 17:46:59.575990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.770 [2024-12-06 17:46:59.583668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.770 [2024-12-06 17:46:59.583687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.770 [2024-12-06 17:46:59.583694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.770 [2024-12-06 17:46:59.593915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.770 [2024-12-06 17:46:59.593933] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.770 [2024-12-06 17:46:59.593939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.770 [2024-12-06 17:46:59.604392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.770 [2024-12-06 17:46:59.604410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.770 [2024-12-06 17:46:59.604417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.770 [2024-12-06 17:46:59.614944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.770 [2024-12-06 17:46:59.614962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.770 [2024-12-06 17:46:59.614968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.770 [2024-12-06 17:46:59.624773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.770 [2024-12-06 17:46:59.624790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.770 [2024-12-06 17:46:59.624797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.770 [2024-12-06 17:46:59.632347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.770 [2024-12-06 17:46:59.632365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.770 [2024-12-06 17:46:59.632372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.770 [2024-12-06 17:46:59.641632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.770 [2024-12-06 17:46:59.641655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.770 [2024-12-06 17:46:59.641661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.770 [2024-12-06 17:46:59.648804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:07.770 [2024-12-06 17:46:59.648823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.770 [2024-12-06 17:46:59.648829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.770 [2024-12-06 17:46:59.658556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5c78c0)
00:31:07.770 [2024-12-06 17:46:59.658574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.770 [2024-12-06 17:46:59.658581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:07.770 [2024-12-06 17:46:59.670072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0)
00:31:07.770 [2024-12-06 17:46:59.670090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.770 [2024-12-06 17:46:59.670096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[log condensed: the same three-line sequence -- the nvme_tcp.c:1365 "data digest error on tqpair=(0x5c78c0)" report, the nvme_qpair.c:243 READ command print, and the nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for every further injected-error READ (cid and lba vary) from 17:46:59.678 through 17:46:59.804]
00:31:07.771 3222.00 IOPS, 402.75 MiB/s [2024-12-06T16:46:59.837Z]
[log condensed: the same error/command/completion sequence continues, uninterrupted except for the throughput tick above, from 17:46:59.817 through 17:47:00.814]
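A saved copy of this console output can be tallied directly to see how many digest errors the initiator reported; a minimal sketch (the grep pattern is verbatim from the entries above, while console.log is a hypothetical file name standing in for a saved copy of this log):

    # Count the initiator-side digest-error reports on this qpair.
    # "console.log" is a placeholder for a saved copy of this console output.
    grep -c 'data digest error on tqpair=(0x5c78c0)' console.log

Each match pairs with one READ that completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 (generic command status) with status code 0x22 (Transient Transport Error).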
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.822 [2024-12-06 17:47:00.771333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.822 [2024-12-06 17:47:00.780375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:08.822 [2024-12-06 17:47:00.780393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.822 [2024-12-06 17:47:00.780399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.822 [2024-12-06 17:47:00.786695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:08.822 [2024-12-06 17:47:00.786713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.822 [2024-12-06 17:47:00.786720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.822 [2024-12-06 17:47:00.791418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:08.822 [2024-12-06 17:47:00.791436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.822 [2024-12-06 17:47:00.791442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.822 [2024-12-06 17:47:00.801900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:08.822 [2024-12-06 17:47:00.801919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.822 [2024-12-06 17:47:00.801925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.822 [2024-12-06 17:47:00.814373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5c78c0) 00:31:08.822 [2024-12-06 17:47:00.814391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.822 [2024-12-06 17:47:00.814398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.822  3474.50 IOPS,   434.31 MiB/s
00:31:08.822                                                              Latency(us)
00:31:08.822 [2024-12-06T16:47:00.888Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:08.822 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:31:08.822      nvme0n1             :       2.01    3473.21     434.15       0.00     0.00    4604.41     727.04   13380.27
00:31:08.822 [2024-12-06T16:47:00.888Z] ===================================================================================================================
00:31:08.822 [2024-12-06T16:47:00.888Z] Total                       :    3473.21     434.15       0.00       0.00    4604.41     727.04   13380.27
00:31:08.822 {
00:31:08.822   "results": [
00:31:08.822     {
00:31:08.822       "job": "nvme0n1",
00:31:08.822       "core_mask": "0x2",
00:31:08.822       "workload": "randread",
00:31:08.822       "status": "finished",
00:31:08.822       "queue_depth": 16,
00:31:08.822       "io_size": 131072,
00:31:08.822       "runtime": 2.005348,
00:31:08.822       "iops": 3473.212629428907,
00:31:08.822       "mibps": 434.15157867861336,
00:31:08.822       "io_failed": 0,
00:31:08.822       "io_timeout": 0,
00:31:08.822       "avg_latency_us": 4604.414897343862,
00:31:08.822       "min_latency_us": 727.04,
00:31:08.822       "max_latency_us": 13380.266666666666
00:31:08.822     }
00:31:08.822   ],
00:31:08.822   "core_count": 1
00:31:08.822 }
00:31:08.822 17:47:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:08.822 17:47:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:08.822 17:47:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:08.822 | .driver_specific
00:31:08.822 | .nvme_error
00:31:08.822 | .status_code
00:31:08.822 | .command_transient_transport_error'
00:31:08.822 17:47:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 225 > 0 ))
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1729693
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1729693 ']'
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1729693
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729693
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729693'
killing process with pid 1729693
17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1729693
Received shutdown signal, test time was about 2.000000 seconds
00:31:09.084
00:31:09.084                                                              Latency(us)
00:31:09.084 [2024-12-06T16:47:01.150Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:09.084 [2024-12-06T16:47:01.150Z] ===================================================================================================================
00:31:09.084 [2024-12-06T16:47:01.150Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:31:09.084 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1729693
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1729744
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1729744 /var/tmp/bperf.sock
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1729744 ']'
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:09.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:09.345 17:47:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:09.345 [2024-12-06 17:47:01.233265] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:31:09.345 [2024-12-06 17:47:01.233318] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729744 ]
00:31:09.345 [2024-12-06 17:47:01.315736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:09.345 [2024-12-06 17:47:01.345295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:10.285 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:10.285 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:10.285 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:10.285 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:10.285 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:10.285 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:10.285 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:10.285 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:10.285 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:10.285 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:10.545 nvme0n1
00:31:10.545 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:31:10.545 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:10.545 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:10.545 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:10.545 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:10.545 17:47:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:10.545 Running I/O for 2 seconds...
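For reference, the setup that the xtrace lines above walk through condenses to roughly the following shell sequence. Every command, flag, socket path, address and NQN is taken verbatim from the trace; the SPDK and BPERF_SOCK variables and the comments are added here for readability and are not part of the log.

    #!/usr/bin/env bash
    # Sketch of the randwrite digest-error setup traced above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf on core 1 (-m 2) as an RPC server; -z defers I/O until an
    # explicit perform_tests call. 4 KiB random writes, queue depth 128, 2 s.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

    # Count NVMe errors per controller and retry I/O indefinitely, so injected
    # digest errors surface as transient-transport-error counters, not failures.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any leftover crc32c error injection; the trace's rpc_cmd talks to
    # the target application's default RPC socket, not bperf.sock.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # Attach the TCP controller with data digest enabled (--ddgst), then make
    # the target-side crc32c operation produce corrupted results (-i 256, as
    # in the trace).
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

    # Kick off the timed run; each corrupted digest appears below as a
    # "Data digest error" plus a COMMAND TRANSIENT TRANSPORT ERROR completion.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests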
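The pass/fail check that follows each run (the get_transient_errcount trace after the randread results above, where (( 225 > 0 )) succeeded) reduces to reading a single counter out of bdev_get_iostat. A minimal sketch, reusing the variables from the sketch above; the jq filter is verbatim from the trace, and the function mirrors what host/digest.sh traced:

    # Return the transient-transport-error count recorded for one bdev.
    get_transient_errcount() {
        "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # The run passes only if at least one transient transport error was counted
    # (the randread pass above recorded 225 of them).
    (( $(get_transient_errcount nvme0n1) > 0 ))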
00:31:10.545 [2024-12-06 17:47:02.583134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef5be8 00:31:10.545 [2024-12-06 17:47:02.584155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.545 [2024-12-06 17:47:02.584183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:10.545 [2024-12-06 17:47:02.592012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.545 [2024-12-06 17:47:02.593005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.545 [2024-12-06 17:47:02.593025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:10.545 [2024-12-06 17:47:02.600690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.545 [2024-12-06 17:47:02.601677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.545 [2024-12-06 17:47:02.601695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.545 [2024-12-06 17:47:02.609163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.610166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.610184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.617642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.618632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.618651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.626085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.627090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.627106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.634528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.635515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.635532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a 
p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.642973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.643983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.643999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.651440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.652449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.652465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.659895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.660893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.660910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.668319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.669320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.669336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.676738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.677701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.677717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.685144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.686135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.686151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.693561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.694562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.694577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.701976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.702938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.702957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.710397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.711384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.711400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.718814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.719801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.719817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.727217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:10.807 [2024-12-06 17:47:02.728203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.728219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.735897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee6fa8 00:31:10.807 [2024-12-06 17:47:02.736635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.736655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.744582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eff3c8 00:31:10.807 [2024-12-06 17:47:02.745683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.745698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.752951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef46d0 00:31:10.807 [2024-12-06 17:47:02.754042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.754058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.761370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef35f0 00:31:10.807 [2024-12-06 17:47:02.762456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.762471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.769774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef2510 00:31:10.807 [2024-12-06 17:47:02.770882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.770898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.778186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef1430 00:31:10.807 [2024-12-06 17:47:02.779271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.779289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.786625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efda78 00:31:10.807 [2024-12-06 17:47:02.787713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.787729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.795048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee4140 00:31:10.807 [2024-12-06 17:47:02.796158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.796173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.807 [2024-12-06 17:47:02.803466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee3060 00:31:10.807 [2024-12-06 17:47:02.804564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.807 [2024-12-06 17:47:02.804580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.808 [2024-12-06 17:47:02.811899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee1f80 00:31:10.808 [2024-12-06 17:47:02.813004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.808 [2024-12-06 17:47:02.813020] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.808 [2024-12-06 17:47:02.820314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ede8a8 00:31:10.808 [2024-12-06 17:47:02.821419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.808 [2024-12-06 17:47:02.821435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.808 [2024-12-06 17:47:02.828722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee7c50 00:31:10.808 [2024-12-06 17:47:02.829819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.808 [2024-12-06 17:47:02.829835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.808 [2024-12-06 17:47:02.837135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee8d30 00:31:10.808 [2024-12-06 17:47:02.838249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.808 [2024-12-06 17:47:02.838265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.808 [2024-12-06 17:47:02.845545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee9e10 00:31:10.808 [2024-12-06 17:47:02.846650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.808 [2024-12-06 17:47:02.846665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.808 [2024-12-06 17:47:02.853972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eeaef0 00:31:10.808 [2024-12-06 17:47:02.855063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.808 [2024-12-06 17:47:02.855078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.808 [2024-12-06 17:47:02.862370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efa3a0 00:31:10.808 [2024-12-06 17:47:02.863461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:10.808 [2024-12-06 17:47:02.863477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:10.808 [2024-12-06 17:47:02.870772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efb480 00:31:11.069 [2024-12-06 17:47:02.871861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.871877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.879220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efc560 00:31:11.069 [2024-12-06 17:47:02.880323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.880339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.887643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efef90 00:31:11.069 [2024-12-06 17:47:02.888702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.888717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.896057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef3a28 00:31:11.069 [2024-12-06 17:47:02.897159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.897174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.904470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef2948 00:31:11.069 [2024-12-06 17:47:02.905572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.905588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.912877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef1868 00:31:11.069 [2024-12-06 17:47:02.913964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.913980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.921292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efe720 00:31:11.069 [2024-12-06 17:47:02.922359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.922375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.929701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efcdd0 00:31:11.069 [2024-12-06 17:47:02.930801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 
17:47:02.930817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.938128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee3498 00:31:11.069 [2024-12-06 17:47:02.939233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.939249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.946541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee23b8 00:31:11.069 [2024-12-06 17:47:02.947601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.947616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.954966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef8e88 00:31:11.069 [2024-12-06 17:47:02.956070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.956085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.069 [2024-12-06 17:47:02.963383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eddc00 00:31:11.069 [2024-12-06 17:47:02.964466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.069 [2024-12-06 17:47:02.964481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:02.971807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee88f8 00:31:11.070 [2024-12-06 17:47:02.972908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:02.972924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:02.980223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee99d8 00:31:11.070 [2024-12-06 17:47:02.981182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:02.981197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:02.988653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eeaab8 00:31:11.070 [2024-12-06 17:47:02.989750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:11.070 [2024-12-06 17:47:02.989765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:02.997060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef9f68 00:31:11.070 [2024-12-06 17:47:02.998151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:02.998169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.004758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef0350 00:31:11.070 [2024-12-06 17:47:03.006188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.006203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.012552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef57b0 00:31:11.070 [2024-12-06 17:47:03.013295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.013310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.021150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eecc78 00:31:11.070 [2024-12-06 17:47:03.021899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.021915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.029582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eedd58 00:31:11.070 [2024-12-06 17:47:03.030354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.030369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.038021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eeee38 00:31:11.070 [2024-12-06 17:47:03.038766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.038782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.046415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eeff18 00:31:11.070 [2024-12-06 17:47:03.047126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25012 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.047142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.054844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec408 00:31:11.070 [2024-12-06 17:47:03.055610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.055626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.063276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee4578 00:31:11.070 [2024-12-06 17:47:03.064032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.064048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.071693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee5658 00:31:11.070 [2024-12-06 17:47:03.072440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.072456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.080115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee6738 00:31:11.070 [2024-12-06 17:47:03.080867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.080882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.088535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef5be8 00:31:11.070 [2024-12-06 17:47:03.089300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.089316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.096950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef6cc8 00:31:11.070 [2024-12-06 17:47:03.097694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.097710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.105357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef7da8 00:31:11.070 [2024-12-06 17:47:03.106108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:6996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.106124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.113774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016edece0 00:31:11.070 [2024-12-06 17:47:03.114541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.114557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.122342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efbcf0 00:31:11.070 [2024-12-06 17:47:03.123070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.123086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.070 [2024-12-06 17:47:03.130809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef4b08 00:31:11.070 [2024-12-06 17:47:03.131561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.070 [2024-12-06 17:47:03.131577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.139215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee6fa8 00:31:11.331 [2024-12-06 17:47:03.139980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.139996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.147611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee0a68 00:31:11.331 [2024-12-06 17:47:03.148374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.148390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.156046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016edf988 00:31:11.331 [2024-12-06 17:47:03.156805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.156821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.164479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eed0b0 00:31:11.331 [2024-12-06 17:47:03.165229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:103 nsid:1 lba:16660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.165245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.172893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eee190 00:31:11.331 [2024-12-06 17:47:03.173645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.173661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.181284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eef270 00:31:11.331 [2024-12-06 17:47:03.182023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.182039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.189706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef0ff8 00:31:11.331 [2024-12-06 17:47:03.190453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.190469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.198118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eeb760 00:31:11.331 [2024-12-06 17:47:03.198889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.198905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.206526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee5220 00:31:11.331 [2024-12-06 17:47:03.207295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.207311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.214949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee6300 00:31:11.331 [2024-12-06 17:47:03.215713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.215732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.223351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef57b0 00:31:11.331 [2024-12-06 17:47:03.224114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.224130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.231747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef6890 00:31:11.331 [2024-12-06 17:47:03.232489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.232505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.240154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef7970 00:31:11.331 [2024-12-06 17:47:03.240895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.240911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.248578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef8a50 00:31:11.331 [2024-12-06 17:47:03.249342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.249358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.257024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efc998 00:31:11.331 [2024-12-06 17:47:03.257737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.257753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.265441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efb8b8 00:31:11.331 [2024-12-06 17:47:03.266208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.331 [2024-12-06 17:47:03.266224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.331 [2024-12-06 17:47:03.273858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee73e0 00:31:11.332 [2024-12-06 17:47:03.274619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:11.332 [2024-12-06 17:47:03.274634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:11.332 [2024-12-06 17:47:03.282275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee0ea0 00:31:11.332 [2024-12-06 
[... the WRITE command/completion pair for the error above and ~35 further data_crc32_calc_done error/command/completion triplets elided, 17:47:03.283 - 17:47:03.577, all on tqpair=(0x1825eb0), each WRITE completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0039 ...]
00:31:11.594 30210.00 IOPS, 118.01 MiB/s [2024-12-06T16:47:03.660Z]
[... ~100 further identical error/command/completion triplets elided, 17:47:03.577 - 17:47:04.445, sqhd advancing from 0039 to 0038 ...]
00:31:12.644 [2024-12-06 17:47:04.452827] tcp.c:2241:data_crc32_calc_done: *ERROR*:
Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee4de8 00:31:12.644 [2024-12-06 17:47:04.453585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.644 [2024-12-06 17:47:04.453601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.644 [2024-12-06 17:47:04.461242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee5ec8 00:31:12.644 [2024-12-06 17:47:04.461975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.644 [2024-12-06 17:47:04.461990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.644 [2024-12-06 17:47:04.469647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eeb328 00:31:12.644 [2024-12-06 17:47:04.470381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.644 [2024-12-06 17:47:04.470397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.644 [2024-12-06 17:47:04.478061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef6458 00:31:12.644 [2024-12-06 17:47:04.478814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.644 [2024-12-06 17:47:04.478829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.644 [2024-12-06 17:47:04.486511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef7538 00:31:12.644 [2024-12-06 17:47:04.487276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.644 [2024-12-06 17:47:04.487292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.644 [2024-12-06 17:47:04.495000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef8618 00:31:12.644 [2024-12-06 17:47:04.495736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.644 [2024-12-06 17:47:04.495752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.644 [2024-12-06 17:47:04.503426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efc560 00:31:12.644 [2024-12-06 17:47:04.504188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.644 [2024-12-06 17:47:04.504204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.644 [2024-12-06 17:47:04.511842] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016efb480 00:31:12.644 [2024-12-06 17:47:04.512586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.644 [2024-12-06 17:47:04.512602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.644 [2024-12-06 17:47:04.520261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee7818 00:31:12.644 [2024-12-06 17:47:04.521020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.644 [2024-12-06 17:47:04.521035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.644 [2024-12-06 17:47:04.528685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee12d8 00:31:12.644 [2024-12-06 17:47:04.529433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.645 [2024-12-06 17:47:04.529451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.645 [2024-12-06 17:47:04.537107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ee01f8 00:31:12.645 [2024-12-06 17:47:04.537862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.645 [2024-12-06 17:47:04.537878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.645 [2024-12-06 17:47:04.545563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eec840 00:31:12.645 [2024-12-06 17:47:04.546307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.645 [2024-12-06 17:47:04.546323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.645 [2024-12-06 17:47:04.553989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eed920 00:31:12.645 [2024-12-06 17:47:04.554738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.645 [2024-12-06 17:47:04.554754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.645 [2024-12-06 17:47:04.562406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eeea00 00:31:12.645 [2024-12-06 17:47:04.563164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.645 [2024-12-06 17:47:04.563180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.645 
[2024-12-06 17:47:04.570825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016eefae0 00:31:12.645 [2024-12-06 17:47:04.571560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.645 [2024-12-06 17:47:04.571576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.645 [2024-12-06 17:47:04.579223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825eb0) with pdu=0x200016ef0bc0 00:31:12.645 30294.00 IOPS, 118.34 MiB/s [2024-12-06T16:47:04.711Z] [2024-12-06 17:47:04.580207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.645 [2024-12-06 17:47:04.580221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:12.645 00:31:12.645 Latency(us) 00:31:12.645 [2024-12-06T16:47:04.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.645 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:12.645 nvme0n1 : 2.00 30303.42 118.37 0.00 0.00 4218.73 2034.35 10103.47 00:31:12.645 [2024-12-06T16:47:04.711Z] =================================================================================================================== 00:31:12.645 [2024-12-06T16:47:04.711Z] Total : 30303.42 118.37 0.00 0.00 4218.73 2034.35 10103.47 00:31:12.645 { 00:31:12.645 "results": [ 00:31:12.645 { 00:31:12.645 "job": "nvme0n1", 00:31:12.645 "core_mask": "0x2", 00:31:12.645 "workload": "randwrite", 00:31:12.645 "status": "finished", 00:31:12.645 "queue_depth": 128, 00:31:12.645 "io_size": 4096, 00:31:12.645 "runtime": 2.004559, 00:31:12.645 "iops": 30303.423346481693, 00:31:12.645 "mibps": 118.37274744719412, 00:31:12.645 "io_failed": 0, 00:31:12.645 "io_timeout": 0, 00:31:12.645 "avg_latency_us": 4218.725024720828, 00:31:12.645 "min_latency_us": 2034.3466666666666, 00:31:12.645 "max_latency_us": 10103.466666666667 00:31:12.645 } 00:31:12.645 ], 00:31:12.645 "core_count": 1 00:31:12.645 } 00:31:12.645 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:12.645 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:12.645 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:12.645 | .driver_specific 00:31:12.645 | .nvme_error 00:31:12.645 | .status_code 00:31:12.645 | .command_transient_transport_error' 00:31:12.645 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:12.905 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 238 > 0 )) 00:31:12.905 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1729744 00:31:12.905 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1729744 ']' 00:31:12.905 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1729744 00:31:12.905 17:47:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:12.905 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.905 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729744 00:31:12.905 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:12.905 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:12.905 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729744' 00:31:12.905 killing process with pid 1729744 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1729744 00:31:12.906 Received shutdown signal, test time was about 2.000000 seconds 00:31:12.906 00:31:12.906 Latency(us) 00:31:12.906 [2024-12-06T16:47:04.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.906 [2024-12-06T16:47:04.972Z] =================================================================================================================== 00:31:12.906 [2024-12-06T16:47:04.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1729744 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1729809 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1729809 /var/tmp/bperf.sock 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1729809 ']' 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:12.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.906 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:13.165 [2024-12-06 17:47:05.009344] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:31:13.165 [2024-12-06 17:47:05.009403] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729809 ] 00:31:13.165 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:13.165 Zero copy mechanism will not be used. 00:31:13.165 [2024-12-06 17:47:05.091707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.165 [2024-12-06 17:47:05.121165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.733 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:13.733 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:13.733 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:13.733 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:13.992 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:13.992 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.992 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:13.992 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.992 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:13.992 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:14.561 nvme0n1 00:31:14.561 17:47:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:14.561 17:47:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.561 17:47:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:14.561 17:47:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.561 17:47:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:14.561 17:47:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
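The trace above is the complete setup for this digest-error pass: bdevperf is launched with -z so it idles on /var/tmp/bperf.sock until told to run; bdev_nvme_set_options --nvme-error-stat enables per-status-code error counters; the controller is attached with --ddgst so the CRC32C data digest is enabled on the TCP connection; and accel_error_inject_error (issued with rpc_cmd, i.e. against the nvmf target application rather than against bperf) is switched from disable to corrupt with -i 32, so the target's crc32c calculations are periodically corrupted. The target then sees spurious data digest errors on incoming write data (the tcp.c:2241 lines below) and completes those writes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the host counts. A minimal stand-alone sketch of the same sequence, assuming an SPDK checkout at $SPDK, the target listening at the default RPC socket /var/tmp/spdk.sock, and an rpc_get_methods readiness poll in place of the harness's waitforlisten helper; every other RPC name and flag is copied from the trace itself:

# Sketch only: $SPDK, /var/tmp/spdk.sock and the readiness poll are assumptions;
# the RPC names and flags below are taken from the trace above.
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &                   # -z: idle until perform_tests
until $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1                                                # poll until the RPC socket answers
done
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable   # target side: start clean
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 32
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# Read the transient-error counter back the way host/digest.sh does:
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The script then asserts that this counter is non-zero, as in the first pass above ((( 238 > 0 ))), proving the injected digest corruption actually surfaced to the host as transient transport errors.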
00:31:14.561 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:14.561 Zero copy mechanism will not be used. 00:31:14.561 Running I/O for 2 seconds... 00:31:14.561 [2024-12-06 17:47:06.523200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.561 [2024-12-06 17:47:06.523312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.561 [2024-12-06 17:47:06.523342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.561 [2024-12-06 17:47:06.527858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.561 [2024-12-06 17:47:06.527935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.561 [2024-12-06 17:47:06.527960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.561 [2024-12-06 17:47:06.532169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.561 [2024-12-06 17:47:06.532243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.532263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.537858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.537911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.537932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.542680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.542747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.542763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.548451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.548671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.548691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.551919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.552303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.552321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.555276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.555497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.555518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.559564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.559794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.559811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.563277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.563496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.563517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.568344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.568538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.568555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.571958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.572158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.572176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.579063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.579267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.579284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.582767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.582958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.582976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.586454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.586544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.586560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.593507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.593694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.593712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.597351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.597530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.597547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.601278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.601454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.601471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.605811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.605989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.606010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.610036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.610214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.610231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.618248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.618526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.618544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.562 [2024-12-06 17:47:06.625179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.562 [2024-12-06 17:47:06.625365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.562 [2024-12-06 17:47:06.625383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.629330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.629509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.629526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.632384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.632554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.632572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.635027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.635179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.635201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.637546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.637703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.637725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.640073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.640225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.640246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.642608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.642780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.642800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.645441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.645609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.645632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.648931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.649083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.649104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.652691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.652891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.652910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.656882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.657089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.657107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.660799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.660954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.660971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.664372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 17:47:06.664550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.824 [2024-12-06 17:47:06.664572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.824 [2024-12-06 17:47:06.667604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.824 [2024-12-06 
17:47:06.667792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.667812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.670922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.671098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.671115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.674495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.674656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.674673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.678090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.678288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.678306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.681400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.681589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.681606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.684849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.685029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.685049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.688138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.688350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.688368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.691468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 
00:31:14.825 [2024-12-06 17:47:06.691666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.691685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.694715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.694858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.694880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.698337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.698481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.698500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.702359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.702563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.702585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.706022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.706161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.706178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.709514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.709657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.709676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.713045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.713199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.713219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.717268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.717404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.717425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.721505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.721674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.721691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.725238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.725415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.725435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.728583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.728794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.728815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.731849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.732047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.732064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.735267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.735490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.735508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.738549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:14.825 [2024-12-06 17:47:06.738743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.825 [2024-12-06 17:47:06.738761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:14.825 [2024-12-06 17:47:06.742335] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8
00:31:14.825 [2024-12-06 17:47:06.742492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:14.825 [2024-12-06 17:47:06.742509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:14.825 [2024-12-06 17:47:06.745766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8
00:31:14.825 [2024-12-06 17:47:06.745957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:14.825 [2024-12-06 17:47:06.745976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:14.826 [... the same three-record sequence (tcp.c data digest error, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for further WRITE commands on qid:1 cid:0, lba varying, len:32, from 17:47:06.749 through 17:47:06.887, always with p:0 m:0 dnr:0 ...]
00:31:15.088 [2024-12-06 17:47:06.890818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8
00:31:15.088 [2024-12-06 17:47:06.890954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:15.088 [2024-12-06 17:47:06.890973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
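The errors above are the point of this test stage: the digest is deliberately corrupted, so the TCP transport's data_crc32_calc_done() callback flags every received data PDU and each WRITE completes with a retriable transport error instead of succeeding. The DDGST field of an NVMe/TCP PDU is a CRC-32C (Castagnoli) over the PDU data. As a rough illustration of the check, here is a minimal bitwise CRC-32C sketch in plain C; it does not use SPDK's accelerated digest path, and the payload and the injected bit-flip are made up for the example:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Software CRC-32C (Castagnoli); reflected polynomial 0x82F63B78. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
	const uint8_t *p = buf;

	crc = ~crc;
	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++) {
			crc = (crc >> 1) ^ (0x82F63B78U & -(crc & 1U));
		}
	}
	return ~crc;
}

int main(void)
{
	uint8_t data[512] = { 0xde, 0xad, 0xbe, 0xef }; /* example payload */
	uint32_t sent_ddgst = crc32c(0, data, sizeof(data));

	data[100] ^= 0x01; /* simulate corruption of the data in flight */
	uint32_t calc_ddgst = crc32c(0, data, sizeof(data));

	if (calc_ddgst != sent_ddgst) {
		/* this is the mismatch tcp.c reports as a data digest error */
		printf("Data digest error: calculated 0x%08x, expected 0x%08x\n",
		       calc_ddgst, sent_ddgst);
	}
	return 0;
}

With the digest mismatch detected, the transport fails the request rather than handing corrupt data up the stack, which is what produces the completion lines that follow.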
00:31:15.088 [2024-12-06 17:47:06.894290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8
00:31:15.088 [2024-12-06 17:47:06.894446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:15.088 [2024-12-06 17:47:06.894465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:15.089 [... the same pattern continues on qid:1 cid:0 through 17:47:07.024, then switches to cid:1 at 17:47:07.028 ...]
00:31:15.089 [2024-12-06 17:47:07.036524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8
00:31:15.089 [2024-12-06 17:47:07.036660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:15.089 [2024-12-06 17:47:07.036679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
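In the completion lines, "(00/22)" is status code type 0x00 (generic command status) and status code 0x22 (Command Transient Transport Error), while dnr:0 means the Do Not Retry bit is clear, so the initiator is allowed to resubmit the failed WRITE. Below is a small self-contained decoder for the 16-bit status word of an NVMe completion queue entry, with the bit layout from the NVMe base specification; the example value is constructed to match these log lines rather than captured from them:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* build a status word: SCT=0x0 (generic), SC=0x22, P/M/DNR all 0 */
	uint16_t status = (0x0u << 9) | (0x22u << 1);

	unsigned p   = status & 0x1u;          /* bit 0:     phase tag        */
	unsigned sc  = (status >> 1) & 0xffu;  /* bits 8:1:  status code      */
	unsigned sct = (status >> 9) & 0x7u;   /* bits 11:9: status code type */
	unsigned m   = (status >> 14) & 0x1u;  /* bit 14:    more             */
	unsigned dnr = (status >> 15) & 0x1u;  /* bit 15:    do not retry     */

	printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n",
	       sct, sc, p, m, dnr, dnr ? "" : " (retriable)");
	return 0;
}

This prints "(00/22) p:0 m:0 dnr:0 (retriable)", matching the shape of the spdk_nvme_print_completion output above.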
00:31:15.089 [2024-12-06 17:47:07.040854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8
00:31:15.089 [2024-12-06 17:47:07.040908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:15.089 [2024-12-06 17:47:07.040926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:15.351 [... the same three-record sequence repeats for further WRITE commands, lba varying, len:32, mostly on qid:1 cid:1 with a brief return to cid:0 around 17:47:07.447, from 17:47:07.044 through 17:47:07.460 ...]
00:31:15.613 [2024-12-06 17:47:07.463257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8
00:31:15.613 [2024-12-06 17:47:07.463358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:15.613 [2024-12-06 17:47:07.463385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.465750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.465821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.465841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.468245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.468363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.468383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.471448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.471533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.471549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.476578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.476890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.476912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.482872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.483011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.483026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.486467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.486513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.486528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.489954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.490034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.490053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.493948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.494029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.494044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.501068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.501331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.501348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.510360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.510650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.510668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.613 6532.00 IOPS, 816.50 MiB/s [2024-12-06T16:47:07.679Z] [2024-12-06 17:47:07.519610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.519938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.519956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.528253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.528524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.528541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.536370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.536473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 [2024-12-06 17:47:07.536491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.613 [2024-12-06 17:47:07.540750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.613 [2024-12-06 17:47:07.540860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.613 
[2024-12-06 17:47:07.540877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.543833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.543942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.543963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.546581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.546696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.546713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.549329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.549436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.549457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.552743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.552886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.552908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.555891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.556051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.556069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.560614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.560880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.560897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.570503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.570771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.570787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.579411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.579714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.579732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.585808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.585990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.586009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.589205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.589307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.589323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.591869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.592004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.592023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.594527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.594684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.594706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.597857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.597924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.597943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.601237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.601398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.601415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.604352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.604487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.604508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.606834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.606959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.606976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.609538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.609659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.609681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.612559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.612671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.612688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.616231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.616359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.616379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.619076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.619183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.619202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.621879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.622015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.622035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.624613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.624722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.624738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.627330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.627435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.627451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.630093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.630206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.614 [2024-12-06 17:47:07.630222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.614 [2024-12-06 17:47:07.632871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.614 [2024-12-06 17:47:07.632983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.615 [2024-12-06 17:47:07.632999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.615 [2024-12-06 17:47:07.635651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.615 [2024-12-06 17:47:07.635755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.615 [2024-12-06 17:47:07.635772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.615 [2024-12-06 17:47:07.638469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.615 [2024-12-06 17:47:07.638574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.615 [2024-12-06 17:47:07.638595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.615 [2024-12-06 17:47:07.641259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.615 [2024-12-06 17:47:07.641363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.615 [2024-12-06 17:47:07.641384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.615 [2024-12-06 17:47:07.645189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.615 [2024-12-06 17:47:07.645347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.615 [2024-12-06 17:47:07.645367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.615 [2024-12-06 17:47:07.654260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.615 [2024-12-06 17:47:07.654500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.615 [2024-12-06 17:47:07.654519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.615 [2024-12-06 17:47:07.663193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.615 [2024-12-06 17:47:07.663517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.615 [2024-12-06 17:47:07.663537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.615 [2024-12-06 17:47:07.671611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.615 [2024-12-06 17:47:07.672053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.615 [2024-12-06 17:47:07.672072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.875 [2024-12-06 17:47:07.679917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.875 [2024-12-06 17:47:07.680174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.875 [2024-12-06 17:47:07.680192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.875 [2024-12-06 17:47:07.689601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.875 [2024-12-06 17:47:07.689851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.875 [2024-12-06 17:47:07.689869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.875 [2024-12-06 17:47:07.698234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.875 [2024-12-06 
17:47:07.698516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.875 [2024-12-06 17:47:07.698534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.875 [2024-12-06 17:47:07.706677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.875 [2024-12-06 17:47:07.706929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.875 [2024-12-06 17:47:07.706947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.875 [2024-12-06 17:47:07.715979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.875 [2024-12-06 17:47:07.716279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.875 [2024-12-06 17:47:07.716296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.875 [2024-12-06 17:47:07.724271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.875 [2024-12-06 17:47:07.724545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.875 [2024-12-06 17:47:07.724572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.875 [2024-12-06 17:47:07.728730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.875 [2024-12-06 17:47:07.728802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.875 [2024-12-06 17:47:07.728824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.875 [2024-12-06 17:47:07.731258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.731308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.731326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.733746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.733797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.733819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.736395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with 
pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.736449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.736470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.738971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.739027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.739045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.742049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.742151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.742168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.746173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.746452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.746469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.756155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.756436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.756459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.760015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.760116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.760137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.763215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.763291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.763311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.765824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.765879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.765900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.768327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.768381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.768398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.770829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.770899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.770920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.773454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.773523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.773544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.776341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.776398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.776417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.778836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.778885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.778903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.781304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.781350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.781367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.783855] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.783914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.783933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.786362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.786425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.786444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.788853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.788910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.788928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.791690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.791739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.791757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.794379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.876 [2024-12-06 17:47:07.794439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.876 [2024-12-06 17:47:07.794458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.876 [2024-12-06 17:47:07.796852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.796898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.796918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.799604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.799681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.799702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.877 
[2024-12-06 17:47:07.802924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.802974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.802990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.806287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.806372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.806391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.809989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.810038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.810057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.813186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.813305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.813326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.816849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.816896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.816913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.820283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.820328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.820343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.824337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.824386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.824406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.828043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.828118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.828134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.831677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.831764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.831781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.834839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.834903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.834922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.837717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.837773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.837791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.840621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.840688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.840705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.843584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.843630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.843652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.877 [2024-12-06 17:47:07.846375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8 00:31:15.877 [2024-12-06 17:47:07.846428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:15.877 [2024-12-06 17:47:07.846447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:15.877 [2024-12-06 17:47:07.848887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8
00:31:15.877 [2024-12-06 17:47:07.848949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:15.877 [2024-12-06 17:47:07.848970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-record pattern -- data_crc32_calc_done data digest error on tqpair=(0x18261f0), WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for every in-flight WRITE from 17:47:07.851 through 17:47:08.517, differing only in timestamp, lba, cid, and sqhd ...]
00:31:16.667 [2024-12-06 17:47:08.517207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18261f0) with pdu=0x200016eff3c8
00:31:16.667 [2024-12-06 17:47:08.517448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:16.667 [2024-12-06 17:47:08.517469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:16.667 6620.50 IOPS, 827.56 MiB/s
00:31:16.667 Latency(us)
[2024-12-06T16:47:08.733Z] Device Information : runtime(s)    IOPS    MiB/s   Fail/s   TO/s   Average      min       max
00:31:16.667 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:31:16.667 nvme0n1            : 2.01       6611.95  826.49   0.00    0.00   2414.33   1146.88  11523.41
[2024-12-06T16:47:08.733Z] ===================================================================================================================
[2024-12-06T16:47:08.733Z] Total              :            6611.95  826.49   0.00    0.00   2414.33   1146.88  11523.41
"io_size": 131072, 00:31:16.667 "runtime": 2.005005, 00:31:16.667 "iops": 6611.953586150658, 00:31:16.667 "mibps": 826.4941982688323, 00:31:16.667 "io_failed": 0, 00:31:16.667 "io_timeout": 0, 00:31:16.667 "avg_latency_us": 2414.327883130925, 00:31:16.667 "min_latency_us": 1146.88, 00:31:16.667 "max_latency_us": 11523.413333333334 00:31:16.667 } 00:31:16.667 ], 00:31:16.667 "core_count": 1 00:31:16.667 } 00:31:16.667 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:16.667 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:16.667 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:16.667 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:16.667 | .driver_specific 00:31:16.667 | .nvme_error 00:31:16.667 | .status_code 00:31:16.667 | .command_transient_transport_error' 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 428 > 0 )) 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1729809 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1729809 ']' 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1729809 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729809 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729809' 00:31:16.927 killing process with pid 1729809 00:31:16.927 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1729809 00:31:16.927 Received shutdown signal, test time was about 2.000000 seconds 00:31:16.927 00:31:16.928 Latency(us) 00:31:16.928 [2024-12-06T16:47:08.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.928 [2024-12-06T16:47:08.994Z] =================================================================================================================== 00:31:16.928 [2024-12-06T16:47:08.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1729809 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1729596 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1729596 ']' 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1729596 
00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729596 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729596' 00:31:16.928 killing process with pid 1729596 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1729596 00:31:16.928 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1729596 00:31:17.188 00:31:17.188 real 0m16.655s 00:31:17.188 user 0m32.880s 00:31:17.188 sys 0m3.716s 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:17.188 ************************************ 00:31:17.188 END TEST nvmf_digest_error 00:31:17.188 ************************************ 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.188 rmmod nvme_tcp 00:31:17.188 rmmod nvme_fabrics 00:31:17.188 rmmod nvme_keyring 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1729596 ']' 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1729596 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1729596 ']' 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1729596 00:31:17.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1729596) - No such process 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1729596 is not found' 00:31:17.188 Process with pid 1729596 is not found 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
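The module unload above is the first half of nvmftestfini; the firewall and namespace cleanup follow in the trace just below. Condensed, the teardown is roughly:

    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK_NVMF-tagged rules
    ip -4 addr flush cvl_0_1
    # _remove_spdk_ns also tears down the cvl_0_0_ns_spdk namespace; its body is hidden
    # behind xtrace_disable_per_cmd here, so assume something like: ip netns delete cvl_0_0_ns_spdk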
00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.188 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.189 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.189 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.189 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.730 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.730 00:31:19.730 real 0m43.335s 00:31:19.730 user 1m8.225s 00:31:19.730 sys 0m13.213s 00:31:19.730 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.730 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:19.730 ************************************ 00:31:19.730 END TEST nvmf_digest 00:31:19.730 ************************************ 00:31:19.730 17:47:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:19.730 17:47:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:19.730 17:47:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:19.730 17:47:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:19.730 17:47:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:19.730 17:47:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.730 17:47:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.730 ************************************ 00:31:19.730 START TEST nvmf_bdevperf 00:31:19.731 ************************************ 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:19.731 * Looking for test storage... 
00:31:19.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:19.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.731 --rc genhtml_branch_coverage=1 00:31:19.731 --rc genhtml_function_coverage=1 00:31:19.731 --rc genhtml_legend=1 00:31:19.731 --rc geninfo_all_blocks=1 00:31:19.731 --rc geninfo_unexecuted_blocks=1 00:31:19.731 00:31:19.731 ' 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:19.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.731 --rc genhtml_branch_coverage=1 00:31:19.731 --rc genhtml_function_coverage=1 00:31:19.731 --rc genhtml_legend=1 00:31:19.731 --rc geninfo_all_blocks=1 00:31:19.731 --rc geninfo_unexecuted_blocks=1 00:31:19.731 00:31:19.731 ' 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:19.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.731 --rc genhtml_branch_coverage=1 00:31:19.731 --rc genhtml_function_coverage=1 00:31:19.731 --rc genhtml_legend=1 00:31:19.731 --rc geninfo_all_blocks=1 00:31:19.731 --rc geninfo_unexecuted_blocks=1 00:31:19.731 00:31:19.731 ' 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:19.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.731 --rc genhtml_branch_coverage=1 00:31:19.731 --rc genhtml_function_coverage=1 00:31:19.731 --rc genhtml_legend=1 00:31:19.731 --rc geninfo_all_blocks=1 00:31:19.731 --rc geninfo_unexecuted_blocks=1 00:31:19.731 00:31:19.731 ' 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.731 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:19.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.732 17:47:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:26.535 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:26.535 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:26.536 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
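The device scan above keys purely off PCI vendor:device IDs (0x8086:0x159b, an Intel E810 variant, matched for both ports of this rig). A rough standalone equivalent of the lookup, with the addresses this job reports:

    lspci -d 8086:159b                         # hypothetical spot-check; lists 0000:4b:00.0 and 0000:4b:00.1 here
    ls /sys/bus/pci/devices/0000:4b:00.0/net   # the netdev mapping the harness gathers below (cvl_0_0)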
00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:26.536 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:26.536 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:26.536 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.797 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.797 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.797 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.797 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:26.797 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.797 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.797 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.797 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:26.797 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:26.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:31:26.797 00:31:26.797 --- 10.0.0.2 ping statistics --- 00:31:26.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.797 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:27.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:31:27.058 00:31:27.058 --- 10.0.0.1 ping statistics --- 00:31:27.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.058 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1732322 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1732322 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1732322 ']' 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.058 17:47:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:27.058 [2024-12-06 17:47:18.994676] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
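Both directions ping cleanly, confirming the namespace plumbing built just above; nvmf_tgt is then launched inside that namespace (its startup banner continues below). Boiled down from the trace, the topology setup is:

    # Target port cvl_0_0 is isolated in its own network namespace; initiator port
    # cvl_0_1 stays in the default namespace. Names and addresses are this job's conventions.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF in the real rule
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator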
00:31:27.058 [2024-12-06 17:47:18.994749] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.058 [2024-12-06 17:47:19.093809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:27.319 [2024-12-06 17:47:19.145910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.319 [2024-12-06 17:47:19.145960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.319 [2024-12-06 17:47:19.145969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.319 [2024-12-06 17:47:19.145976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.319 [2024-12-06 17:47:19.145982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:27.319 [2024-12-06 17:47:19.147767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:27.319 [2024-12-06 17:47:19.148046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:27.319 [2024-12-06 17:47:19.148047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:27.892 [2024-12-06 17:47:19.864276] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:27.892 Malloc0 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
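With the TCP transport, the 64 MiB Malloc0 bdev and subsystem cnode1 now created, the namespace and listener RPCs follow directly below. Pulled out of the rpc_cmd wrapper, tgt_init here amounts to:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420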
00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:27.892 [2024-12-06 17:47:19.936878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:27.892 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:27.892 { 00:31:27.892 "params": { 00:31:27.892 "name": "Nvme$subsystem", 00:31:27.892 "trtype": "$TEST_TRANSPORT", 00:31:27.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.892 "adrfam": "ipv4", 00:31:27.892 "trsvcid": "$NVMF_PORT", 00:31:27.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.892 "hdgst": ${hdgst:-false}, 00:31:27.892 "ddgst": ${ddgst:-false} 00:31:27.893 }, 00:31:27.893 "method": "bdev_nvme_attach_controller" 00:31:27.893 } 00:31:27.893 EOF 00:31:27.893 )") 00:31:27.893 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:27.893 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:31:28.153 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:28.153 17:47:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:28.153 "params": { 00:31:28.153 "name": "Nvme1", 00:31:28.153 "trtype": "tcp", 00:31:28.153 "traddr": "10.0.0.2", 00:31:28.153 "adrfam": "ipv4", 00:31:28.153 "trsvcid": "4420", 00:31:28.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.153 "hdgst": false, 00:31:28.154 "ddgst": false 00:31:28.154 }, 00:31:28.154 "method": "bdev_nvme_attach_controller" 00:31:28.154 }' 00:31:28.154 [2024-12-06 17:47:19.997543] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
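The printf output above is the rendered bdev_nvme_attach_controller entry that gen_nvmf_target_json hands to bdevperf over /dev/fd/62, and the pass now starting is a 1-second baseline verify run. An equivalent standalone invocation would look roughly like:

    # Sketch; gen_nvmf_target_json is the helper from the nvmf common script traced above,
    # and the <(...) process substitution reproduces the /dev/fd/62 stream the harness uses.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 1      # queue depth 128, 4 KiB I/O, verify workload, 1 s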
00:31:28.154 [2024-12-06 17:47:19.997605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1732354 ] 00:31:28.154 [2024-12-06 17:47:20.103137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.154 [2024-12-06 17:47:20.156964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.745 Running I/O for 1 seconds... 00:31:29.683 10056.00 IOPS, 39.28 MiB/s 00:31:29.683 Latency(us) 00:31:29.683 [2024-12-06T16:47:21.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.683 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:29.683 Verification LBA range: start 0x0 length 0x4000 00:31:29.683 Nvme1n1 : 1.01 10128.20 39.56 0.00 0.00 12581.07 1297.07 14417.92 00:31:29.683 [2024-12-06T16:47:21.749Z] =================================================================================================================== 00:31:29.683 [2024-12-06T16:47:21.749Z] Total : 10128.20 39.56 0.00 0.00 12581.07 1297.07 14417.92 00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1732382 00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:29.683 { 00:31:29.683 "params": { 00:31:29.683 "name": "Nvme$subsystem", 00:31:29.683 "trtype": "$TEST_TRANSPORT", 00:31:29.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:29.683 "adrfam": "ipv4", 00:31:29.683 "trsvcid": "$NVMF_PORT", 00:31:29.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:29.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:29.683 "hdgst": ${hdgst:-false}, 00:31:29.683 "ddgst": ${ddgst:-false} 00:31:29.683 }, 00:31:29.683 "method": "bdev_nvme_attach_controller" 00:31:29.683 } 00:31:29.683 EOF 00:31:29.683 )") 00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:31:29.683 17:47:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:29.683 "params": {
00:31:29.683 "name": "Nvme1",
00:31:29.683 "trtype": "tcp",
00:31:29.683 "traddr": "10.0.0.2",
00:31:29.683 "adrfam": "ipv4",
00:31:29.683 "trsvcid": "4420",
00:31:29.683 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:29.683 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:29.683 "hdgst": false,
00:31:29.683 "ddgst": false
00:31:29.683 },
00:31:29.683 "method": "bdev_nvme_attach_controller"
00:31:29.683 }'
00:31:29.683 [2024-12-06 17:47:21.666412] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:31:29.683 [2024-12-06 17:47:21.666466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1732382 ]
00:31:29.942 [2024-12-06 17:47:21.753322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:29.943 [2024-12-06 17:47:21.788033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:29.943 Running I/O for 15 seconds...
00:31:32.307 10924.00 IOPS, 42.67 MiB/s [2024-12-06T16:47:24.635Z] 10945.50 IOPS, 42.76 MiB/s [2024-12-06T16:47:24.635Z]
17:47:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1732322
00:31:32.569 17:47:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:31:32.569 [2024-12-06 17:47:24.629763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.569 [2024-12-06 17:47:24.629805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[00:31:32.569 - 00:31:32.571: every remaining queued command (READ lba:105400-105568, WRITE lba:105584-106408, len:8 each) is printed the same way and completed identically with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:31:32.571 [2024-12-06 17:47:24.632021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1039ea0 is same with the state(6) to be set
00:31:32.571 [2024-12-06 17:47:24.632031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:32.571 [2024-12-06 17:47:24.632037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:32.571 [2024-12-06 17:47:24.632044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105576 len:8 PRP1 0x0 PRP2 0x0
00:31:32.571 [2024-12-06 17:47:24.632051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:32.832 [2024-12-06 17:47:24.635660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:32.832 [2024-12-06 17:47:24.635713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:32.832 [2024-12-06 17:47:24.636485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.832 [2024-12-06 17:47:24.636502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:32.832 [2024-12-06 17:47:24.636511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:32.832 [2024-12-06 17:47:24.636739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:32.832 [2024-12-06 17:47:24.636963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:32.832 [2024-12-06 17:47:24.636972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:32.832 [2024-12-06 17:47:24.636981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:32.832 [2024-12-06 17:47:24.636989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
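The burst of ABORTED - SQ DELETION completions and the errno = 111 reconnect failures above are the behavior this test intends to provoke, not a defect: host/bdevperf.sh kills the nvmf target out from under bdevperf (the kill -9 1732322 earlier in this excerpt) and then sleeps, so every in-flight command is aborted and every reconnect is refused until the target comes back. On Linux, errno 111 is ECONNREFUSED. A minimal sketch of the pattern in shell, with TGT_PID standing in for the target PID the real script resolves itself:

  # Kill the NVMe-oF target mid-run; the initiator's queued I/O is aborted
  # (SQ DELETION) and its reconnect attempts get ECONNREFUSED until restart.
  kill -9 "$TGT_PID"
  sleep 3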
[00:31:32.832 - 00:31:32.833: 15 more reconnect attempts, 17:47:24.649899 through 17:47:24.845122, fail the same way: connect() failed, errno = 111; sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420; controller reinitialization failed; Resetting controller failed.]
00:31:32.833 [2024-12-06 17:47:24.859133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:32.833 [2024-12-06 17:47:24.859709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:32.833 [2024-12-06 17:47:24.859740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:32.833 [2024-12-06 17:47:24.859749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:32.833 [2024-12-06 17:47:24.859976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:32.833 [2024-12-06 17:47:24.860200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:32.833 [2024-12-06 17:47:24.860210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:32.833 [2024-12-06 17:47:24.860218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:32.833 [2024-12-06 17:47:24.860226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:32.833 [2024-12-06 17:47:24.873145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:32.833 [2024-12-06 17:47:24.873800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.833 [2024-12-06 17:47:24.873862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:32.833 [2024-12-06 17:47:24.873875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:32.833 [2024-12-06 17:47:24.874133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:32.833 [2024-12-06 17:47:24.874362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:32.833 [2024-12-06 17:47:24.874372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:32.833 [2024-12-06 17:47:24.874380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:32.833 [2024-12-06 17:47:24.874389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:32.833 [2024-12-06 17:47:24.887111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:32.833 [2024-12-06 17:47:24.887764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.833 [2024-12-06 17:47:24.887829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:32.833 [2024-12-06 17:47:24.887842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:32.833 [2024-12-06 17:47:24.888108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:32.833 [2024-12-06 17:47:24.888337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:32.833 [2024-12-06 17:47:24.888349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:32.833 [2024-12-06 17:47:24.888357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:32.833 [2024-12-06 17:47:24.888366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.094 [2024-12-06 17:47:24.901143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.094 [2024-12-06 17:47:24.901910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.094 [2024-12-06 17:47:24.901972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.094 [2024-12-06 17:47:24.901986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.094 [2024-12-06 17:47:24.902244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.094 [2024-12-06 17:47:24.902475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.094 [2024-12-06 17:47:24.902485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.094 [2024-12-06 17:47:24.902494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.094 [2024-12-06 17:47:24.902503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.094 [2024-12-06 17:47:24.915060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.094 [2024-12-06 17:47:24.915632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.094 [2024-12-06 17:47:24.915669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.094 [2024-12-06 17:47:24.915678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.094 [2024-12-06 17:47:24.915905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.094 [2024-12-06 17:47:24.916129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.094 [2024-12-06 17:47:24.916139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.094 [2024-12-06 17:47:24.916147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.094 [2024-12-06 17:47:24.916156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.094 [2024-12-06 17:47:24.929081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.094 [2024-12-06 17:47:24.929748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.094 [2024-12-06 17:47:24.929812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.094 [2024-12-06 17:47:24.929826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.094 [2024-12-06 17:47:24.930086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.094 [2024-12-06 17:47:24.930315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.094 [2024-12-06 17:47:24.930332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.094 [2024-12-06 17:47:24.930341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.094 [2024-12-06 17:47:24.930349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.094 [2024-12-06 17:47:24.943095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.094 [2024-12-06 17:47:24.943834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.094 [2024-12-06 17:47:24.943896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.094 [2024-12-06 17:47:24.943909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.094 [2024-12-06 17:47:24.944167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.094 [2024-12-06 17:47:24.944397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.094 [2024-12-06 17:47:24.944407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.094 [2024-12-06 17:47:24.944415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.094 [2024-12-06 17:47:24.944424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.094 9852.67 IOPS, 38.49 MiB/s [2024-12-06T16:47:25.160Z] [2024-12-06 17:47:24.958624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.094 [2024-12-06 17:47:24.959360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.094 [2024-12-06 17:47:24.959422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.094 [2024-12-06 17:47:24.959434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.094 [2024-12-06 17:47:24.959708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.094 [2024-12-06 17:47:24.959939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.094 [2024-12-06 17:47:24.959949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.094 [2024-12-06 17:47:24.959958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.094 [2024-12-06 17:47:24.959967] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.094 [2024-12-06 17:47:24.972690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.094 [2024-12-06 17:47:24.973420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.094 [2024-12-06 17:47:24.973482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.094 [2024-12-06 17:47:24.973495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.094 [2024-12-06 17:47:24.973769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.094 [2024-12-06 17:47:24.974000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.094 [2024-12-06 17:47:24.974009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.094 [2024-12-06 17:47:24.974017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.094 [2024-12-06 17:47:24.974036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
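The "9852.67 IOPS, 38.49 MiB/s" figure interleaved above is the test's periodic throughput sample, still being reported while the reconnect attempts fail. The two numbers are mutually consistent if the workload issues 4 KiB I/Os (an assumption — the queue setup is not shown in this excerpt): 9852.67 IOPS × 4096 B = 40,356,536 B/s ≈ 38.49 MiB/s.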
00:31:33.094 [2024-12-06 17:47:24.986745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.094 [2024-12-06 17:47:24.987412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.094 [2024-12-06 17:47:24.987474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.094 [2024-12-06 17:47:24.987488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.094 [2024-12-06 17:47:24.987761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.094 [2024-12-06 17:47:24.987992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.094 [2024-12-06 17:47:24.988002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.094 [2024-12-06 17:47:24.988010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.094 [2024-12-06 17:47:24.988019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.094 [2024-12-06 17:47:25.000731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.094 [2024-12-06 17:47:25.001326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.094 [2024-12-06 17:47:25.001355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.094 [2024-12-06 17:47:25.001364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.094 [2024-12-06 17:47:25.001590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.094 [2024-12-06 17:47:25.001826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.094 [2024-12-06 17:47:25.001837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.094 [2024-12-06 17:47:25.001845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.094 [2024-12-06 17:47:25.001853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.094 [2024-12-06 17:47:25.014594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.094 [2024-12-06 17:47:25.015329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.094 [2024-12-06 17:47:25.015392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.094 [2024-12-06 17:47:25.015404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.094 [2024-12-06 17:47:25.015679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.094 [2024-12-06 17:47:25.015909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.094 [2024-12-06 17:47:25.015918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.094 [2024-12-06 17:47:25.015927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.015936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.095 [2024-12-06 17:47:25.028647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.095 [2024-12-06 17:47:25.029246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.095 [2024-12-06 17:47:25.029273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.095 [2024-12-06 17:47:25.029282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.095 [2024-12-06 17:47:25.029507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.095 [2024-12-06 17:47:25.029741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.095 [2024-12-06 17:47:25.029752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.095 [2024-12-06 17:47:25.029760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.029768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.095 [2024-12-06 17:47:25.042682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.095 [2024-12-06 17:47:25.043133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.095 [2024-12-06 17:47:25.043158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.095 [2024-12-06 17:47:25.043166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.095 [2024-12-06 17:47:25.043390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.095 [2024-12-06 17:47:25.043614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.095 [2024-12-06 17:47:25.043623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.095 [2024-12-06 17:47:25.043631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.043652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.095 [2024-12-06 17:47:25.056572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.095 [2024-12-06 17:47:25.057052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.095 [2024-12-06 17:47:25.057081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.095 [2024-12-06 17:47:25.057090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.095 [2024-12-06 17:47:25.057315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.095 [2024-12-06 17:47:25.057538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.095 [2024-12-06 17:47:25.057546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.095 [2024-12-06 17:47:25.057554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.057561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.095 [2024-12-06 17:47:25.070472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.095 [2024-12-06 17:47:25.071040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.095 [2024-12-06 17:47:25.071065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.095 [2024-12-06 17:47:25.071073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.095 [2024-12-06 17:47:25.071303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.095 [2024-12-06 17:47:25.071527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.095 [2024-12-06 17:47:25.071537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.095 [2024-12-06 17:47:25.071544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.071551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.095 [2024-12-06 17:47:25.084465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.095 [2024-12-06 17:47:25.085128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.095 [2024-12-06 17:47:25.085191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.095 [2024-12-06 17:47:25.085204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.095 [2024-12-06 17:47:25.085462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.095 [2024-12-06 17:47:25.085707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.095 [2024-12-06 17:47:25.085717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.095 [2024-12-06 17:47:25.085725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.085734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.095 [2024-12-06 17:47:25.098438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.095 [2024-12-06 17:47:25.099141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.095 [2024-12-06 17:47:25.099202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.095 [2024-12-06 17:47:25.099215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.095 [2024-12-06 17:47:25.099474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.095 [2024-12-06 17:47:25.099718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.095 [2024-12-06 17:47:25.099729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.095 [2024-12-06 17:47:25.099737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.099746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.095 [2024-12-06 17:47:25.112469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.095 [2024-12-06 17:47:25.113176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.095 [2024-12-06 17:47:25.113238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.095 [2024-12-06 17:47:25.113250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.095 [2024-12-06 17:47:25.113509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.095 [2024-12-06 17:47:25.113757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.095 [2024-12-06 17:47:25.113776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.095 [2024-12-06 17:47:25.113785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.113795] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.095 [2024-12-06 17:47:25.126381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.095 [2024-12-06 17:47:25.127036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.095 [2024-12-06 17:47:25.127068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.095 [2024-12-06 17:47:25.127077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.095 [2024-12-06 17:47:25.127304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.095 [2024-12-06 17:47:25.127527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.095 [2024-12-06 17:47:25.127536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.095 [2024-12-06 17:47:25.127544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.127551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.095 [2024-12-06 17:47:25.140276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.095 [2024-12-06 17:47:25.140972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.095 [2024-12-06 17:47:25.141034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.095 [2024-12-06 17:47:25.141047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.095 [2024-12-06 17:47:25.141307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.095 [2024-12-06 17:47:25.141540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.095 [2024-12-06 17:47:25.141552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.095 [2024-12-06 17:47:25.141560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.141569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.095 [2024-12-06 17:47:25.154330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.095 [2024-12-06 17:47:25.155016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.095 [2024-12-06 17:47:25.155078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.095 [2024-12-06 17:47:25.155091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.095 [2024-12-06 17:47:25.155349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.095 [2024-12-06 17:47:25.155579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.095 [2024-12-06 17:47:25.155589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.095 [2024-12-06 17:47:25.155597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.095 [2024-12-06 17:47:25.155613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.363 [2024-12-06 17:47:25.168367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.363 [2024-12-06 17:47:25.168883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.363 [2024-12-06 17:47:25.168914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.363 [2024-12-06 17:47:25.168923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.363 [2024-12-06 17:47:25.169149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.363 [2024-12-06 17:47:25.169372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.363 [2024-12-06 17:47:25.169382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.363 [2024-12-06 17:47:25.169389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.363 [2024-12-06 17:47:25.169397] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.363 [2024-12-06 17:47:25.182348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.363 [2024-12-06 17:47:25.182994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.363 [2024-12-06 17:47:25.183056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.363 [2024-12-06 17:47:25.183069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.363 [2024-12-06 17:47:25.183327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.363 [2024-12-06 17:47:25.183557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.364 [2024-12-06 17:47:25.183566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.364 [2024-12-06 17:47:25.183574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.364 [2024-12-06 17:47:25.183583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.364 [2024-12-06 17:47:25.196291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.364 [2024-12-06 17:47:25.196977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.364 [2024-12-06 17:47:25.197040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.364 [2024-12-06 17:47:25.197052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.364 [2024-12-06 17:47:25.197311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.364 [2024-12-06 17:47:25.197541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.364 [2024-12-06 17:47:25.197550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.364 [2024-12-06 17:47:25.197558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.364 [2024-12-06 17:47:25.197567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.364 [2024-12-06 17:47:25.210288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.364 [2024-12-06 17:47:25.211016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.364 [2024-12-06 17:47:25.211078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.364 [2024-12-06 17:47:25.211091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.364 [2024-12-06 17:47:25.211350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.364 [2024-12-06 17:47:25.211579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.364 [2024-12-06 17:47:25.211590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.364 [2024-12-06 17:47:25.211599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.364 [2024-12-06 17:47:25.211608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.364 [2024-12-06 17:47:25.224330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.364 [2024-12-06 17:47:25.225039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.364 [2024-12-06 17:47:25.225101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.364 [2024-12-06 17:47:25.225114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.365 [2024-12-06 17:47:25.225373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.365 [2024-12-06 17:47:25.225602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.365 [2024-12-06 17:47:25.225612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.365 [2024-12-06 17:47:25.225620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.365 [2024-12-06 17:47:25.225629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.365 [2024-12-06 17:47:25.238354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.365 [2024-12-06 17:47:25.239045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.365 [2024-12-06 17:47:25.239107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.365 [2024-12-06 17:47:25.239119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.365 [2024-12-06 17:47:25.239377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.365 [2024-12-06 17:47:25.239607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.365 [2024-12-06 17:47:25.239617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.365 [2024-12-06 17:47:25.239625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.365 [2024-12-06 17:47:25.239634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.365 [2024-12-06 17:47:25.252376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.365 [2024-12-06 17:47:25.253045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.365 [2024-12-06 17:47:25.253107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.365 [2024-12-06 17:47:25.253119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.366 [2024-12-06 17:47:25.253393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.366 [2024-12-06 17:47:25.253623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.366 [2024-12-06 17:47:25.253632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.366 [2024-12-06 17:47:25.253655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.366 [2024-12-06 17:47:25.253664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.366 [2024-12-06 17:47:25.266372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.366 [2024-12-06 17:47:25.267007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.366 [2024-12-06 17:47:25.267070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.366 [2024-12-06 17:47:25.267083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.366 [2024-12-06 17:47:25.267341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.366 [2024-12-06 17:47:25.267571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.366 [2024-12-06 17:47:25.267580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.366 [2024-12-06 17:47:25.267589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.366 [2024-12-06 17:47:25.267598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.366 [2024-12-06 17:47:25.280342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.366 [2024-12-06 17:47:25.281087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.366 [2024-12-06 17:47:25.281149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.366 [2024-12-06 17:47:25.281162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.366 [2024-12-06 17:47:25.281420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.366 [2024-12-06 17:47:25.281664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.366 [2024-12-06 17:47:25.281675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.366 [2024-12-06 17:47:25.281683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.366 [2024-12-06 17:47:25.281692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.366 [2024-12-06 17:47:25.294399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.366 [2024-12-06 17:47:25.295138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.366 [2024-12-06 17:47:25.295199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.366 [2024-12-06 17:47:25.295212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.366 [2024-12-06 17:47:25.295470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.366 [2024-12-06 17:47:25.295715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.366 [2024-12-06 17:47:25.295733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.366 [2024-12-06 17:47:25.295742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.366 [2024-12-06 17:47:25.295750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.366 [2024-12-06 17:47:25.308258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.366 [2024-12-06 17:47:25.308952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.366 [2024-12-06 17:47:25.309014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.366 [2024-12-06 17:47:25.309026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.366 [2024-12-06 17:47:25.309285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.366 [2024-12-06 17:47:25.309514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.366 [2024-12-06 17:47:25.309524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.366 [2024-12-06 17:47:25.309534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.366 [2024-12-06 17:47:25.309543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.366 [2024-12-06 17:47:25.322268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.366 [2024-12-06 17:47:25.322982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.366 [2024-12-06 17:47:25.323044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.366 [2024-12-06 17:47:25.323057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.366 [2024-12-06 17:47:25.323315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.366 [2024-12-06 17:47:25.323545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.366 [2024-12-06 17:47:25.323555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.366 [2024-12-06 17:47:25.323563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.366 [2024-12-06 17:47:25.323572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.366 [2024-12-06 17:47:25.336297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.366 [2024-12-06 17:47:25.337022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.366 [2024-12-06 17:47:25.337083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.366 [2024-12-06 17:47:25.337096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.366 [2024-12-06 17:47:25.337355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.366 [2024-12-06 17:47:25.337584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.366 [2024-12-06 17:47:25.337594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.366 [2024-12-06 17:47:25.337602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.366 [2024-12-06 17:47:25.337618] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.366 [2024-12-06 17:47:25.350354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.366 [2024-12-06 17:47:25.351048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.366 [2024-12-06 17:47:25.351110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.366 [2024-12-06 17:47:25.351124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.366 [2024-12-06 17:47:25.351381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.366 [2024-12-06 17:47:25.351611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.366 [2024-12-06 17:47:25.351621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.366 [2024-12-06 17:47:25.351629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.366 [2024-12-06 17:47:25.351651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.366 [2024-12-06 17:47:25.364395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.366 [2024-12-06 17:47:25.365066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.366 [2024-12-06 17:47:25.365127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.366 [2024-12-06 17:47:25.365140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.366 [2024-12-06 17:47:25.365399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.366 [2024-12-06 17:47:25.365628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.367 [2024-12-06 17:47:25.365653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.367 [2024-12-06 17:47:25.365663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.367 [2024-12-06 17:47:25.365672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.367 [2024-12-06 17:47:25.378440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.367 [2024-12-06 17:47:25.379136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.367 [2024-12-06 17:47:25.379198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.367 [2024-12-06 17:47:25.379212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.367 [2024-12-06 17:47:25.379471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.367 [2024-12-06 17:47:25.379711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.367 [2024-12-06 17:47:25.379722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.367 [2024-12-06 17:47:25.379731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.367 [2024-12-06 17:47:25.379741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.367 [2024-12-06 17:47:25.392458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.367 [2024-12-06 17:47:25.393156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.367 [2024-12-06 17:47:25.393218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.367 [2024-12-06 17:47:25.393232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.367 [2024-12-06 17:47:25.393490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.367 [2024-12-06 17:47:25.393735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.367 [2024-12-06 17:47:25.393747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.367 [2024-12-06 17:47:25.393756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.367 [2024-12-06 17:47:25.393765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.367 [2024-12-06 17:47:25.406593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.367 [2024-12-06 17:47:25.407199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.367 [2024-12-06 17:47:25.407261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.367 [2024-12-06 17:47:25.407274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.367 [2024-12-06 17:47:25.407532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.367 [2024-12-06 17:47:25.407776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.367 [2024-12-06 17:47:25.407787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.367 [2024-12-06 17:47:25.407796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.367 [2024-12-06 17:47:25.407805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.367 [2024-12-06 17:47:25.420566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.367 [2024-12-06 17:47:25.421229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.367 [2024-12-06 17:47:25.421258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.367 [2024-12-06 17:47:25.421267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.367 [2024-12-06 17:47:25.421494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.367 [2024-12-06 17:47:25.421729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.367 [2024-12-06 17:47:25.421740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.367 [2024-12-06 17:47:25.421748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.367 [2024-12-06 17:47:25.421756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.630 [2024-12-06 17:47:25.434482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.630 [2024-12-06 17:47:25.435083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.630 [2024-12-06 17:47:25.435108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.630 [2024-12-06 17:47:25.435116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.630 [2024-12-06 17:47:25.435349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.630 [2024-12-06 17:47:25.435572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.630 [2024-12-06 17:47:25.435580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.630 [2024-12-06 17:47:25.435588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.630 [2024-12-06 17:47:25.435595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.630 [2024-12-06 17:47:25.447286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.630 [2024-12-06 17:47:25.447927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.630 [2024-12-06 17:47:25.447978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.630 [2024-12-06 17:47:25.447988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.630 [2024-12-06 17:47:25.448171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.630 [2024-12-06 17:47:25.448330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.630 [2024-12-06 17:47:25.448337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.630 [2024-12-06 17:47:25.448343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.630 [2024-12-06 17:47:25.448350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.630 [2024-12-06 17:47:25.459998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.630 [2024-12-06 17:47:25.460590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.630 [2024-12-06 17:47:25.460647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.630 [2024-12-06 17:47:25.460658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.630 [2024-12-06 17:47:25.460839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.630 [2024-12-06 17:47:25.460998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.630 [2024-12-06 17:47:25.461004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.630 [2024-12-06 17:47:25.461010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.630 [2024-12-06 17:47:25.461016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.630 [2024-12-06 17:47:25.472803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.630 [2024-12-06 17:47:25.473423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.630 [2024-12-06 17:47:25.473467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.630 [2024-12-06 17:47:25.473476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.630 [2024-12-06 17:47:25.473666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.630 [2024-12-06 17:47:25.473824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.630 [2024-12-06 17:47:25.473837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.630 [2024-12-06 17:47:25.473842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.630 [2024-12-06 17:47:25.473848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.630 [2024-12-06 17:47:25.485451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.630 [2024-12-06 17:47:25.486027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.630 [2024-12-06 17:47:25.486069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.630 [2024-12-06 17:47:25.486078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.630 [2024-12-06 17:47:25.486254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.630 [2024-12-06 17:47:25.486412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.630 [2024-12-06 17:47:25.486419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.630 [2024-12-06 17:47:25.486426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.630 [2024-12-06 17:47:25.486432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.630 [2024-12-06 17:47:25.498186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.630 [2024-12-06 17:47:25.498760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.630 [2024-12-06 17:47:25.498797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.630 [2024-12-06 17:47:25.498806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.630 [2024-12-06 17:47:25.498980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.630 [2024-12-06 17:47:25.499136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.630 [2024-12-06 17:47:25.499143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.630 [2024-12-06 17:47:25.499149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.630 [2024-12-06 17:47:25.499155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.630 [2024-12-06 17:47:25.510903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.630 [2024-12-06 17:47:25.511440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.630 [2024-12-06 17:47:25.511477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.630 [2024-12-06 17:47:25.511486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.630 [2024-12-06 17:47:25.511667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.630 [2024-12-06 17:47:25.511824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.630 [2024-12-06 17:47:25.511831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.630 [2024-12-06 17:47:25.511836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.630 [2024-12-06 17:47:25.511846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.630 [2024-12-06 17:47:25.523582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.630 [2024-12-06 17:47:25.523956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.630 [2024-12-06 17:47:25.523974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.630 [2024-12-06 17:47:25.523980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.630 [2024-12-06 17:47:25.524135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.630 [2024-12-06 17:47:25.524288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.630 [2024-12-06 17:47:25.524294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.630 [2024-12-06 17:47:25.524299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.630 [2024-12-06 17:47:25.524304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.630 [2024-12-06 17:47:25.536354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.630 [2024-12-06 17:47:25.536836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.630 [2024-12-06 17:47:25.536851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.630 [2024-12-06 17:47:25.536857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.630 [2024-12-06 17:47:25.537010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.630 [2024-12-06 17:47:25.537163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.630 [2024-12-06 17:47:25.537169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.537174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.537179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.631 [2024-12-06 17:47:25.549057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.549532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.549545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.549551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.549708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.549861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.549867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.549872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.549877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.631 [2024-12-06 17:47:25.561741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.562236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.562248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.562254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.562406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.562558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.562564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.562569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.562573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.631 [2024-12-06 17:47:25.574449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.575002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.575033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.575042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.575210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.575366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.575372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.575377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.575383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.631 [2024-12-06 17:47:25.587120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.587714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.587745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.587753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.587924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.588080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.588086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.588092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.588097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.631 [2024-12-06 17:47:25.599838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.600383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.600413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.600422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.600594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.600756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.600764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.600769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.600775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.631 [2024-12-06 17:47:25.612505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.613053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.613083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.613091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.613259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.613414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.613420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.613426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.613431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.631 [2024-12-06 17:47:25.625165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.625683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.625699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.625704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.625857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.626009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.626015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.626020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.626025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.631 [2024-12-06 17:47:25.637917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.638413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.638427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.638433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.638585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.638742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.638751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.638757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.638761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.631 [2024-12-06 17:47:25.650640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.651176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.651206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.651215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.651383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.651539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.651546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.651552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.651557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.631 [2024-12-06 17:47:25.663415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.663962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.663992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.664001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.664169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.664324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.664331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.664336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.664341] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.631 [2024-12-06 17:47:25.676080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.676626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.676662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.676670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.676840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.676996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.677002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.677008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.677017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.631 [2024-12-06 17:47:25.688744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.631 [2024-12-06 17:47:25.689309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.631 [2024-12-06 17:47:25.689339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.631 [2024-12-06 17:47:25.689348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.631 [2024-12-06 17:47:25.689516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.631 [2024-12-06 17:47:25.689679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.631 [2024-12-06 17:47:25.689686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.631 [2024-12-06 17:47:25.689691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.631 [2024-12-06 17:47:25.689697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.891 [2024-12-06 17:47:25.701436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.891 [2024-12-06 17:47:25.702043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.891 [2024-12-06 17:47:25.702073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.891 [2024-12-06 17:47:25.702082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.891 [2024-12-06 17:47:25.702250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.891 [2024-12-06 17:47:25.702406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.891 [2024-12-06 17:47:25.702412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.891 [2024-12-06 17:47:25.702417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.891 [2024-12-06 17:47:25.702423] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.891 [2024-12-06 17:47:25.714158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.891 [2024-12-06 17:47:25.714728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.891 [2024-12-06 17:47:25.714758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.891 [2024-12-06 17:47:25.714766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.891 [2024-12-06 17:47:25.714937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.891 [2024-12-06 17:47:25.715093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.891 [2024-12-06 17:47:25.715099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.891 [2024-12-06 17:47:25.715106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.891 [2024-12-06 17:47:25.715112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.891 [2024-12-06 17:47:25.726846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.891 [2024-12-06 17:47:25.727412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.891 [2024-12-06 17:47:25.727442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.891 [2024-12-06 17:47:25.727451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.891 [2024-12-06 17:47:25.727619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.891 [2024-12-06 17:47:25.727782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.891 [2024-12-06 17:47:25.727790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.891 [2024-12-06 17:47:25.727795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.891 [2024-12-06 17:47:25.727801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.891 [2024-12-06 17:47:25.739531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.891 [2024-12-06 17:47:25.740020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.891 [2024-12-06 17:47:25.740050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.891 [2024-12-06 17:47:25.740059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.891 [2024-12-06 17:47:25.740227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.891 [2024-12-06 17:47:25.740382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.891 [2024-12-06 17:47:25.740389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.891 [2024-12-06 17:47:25.740394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.891 [2024-12-06 17:47:25.740400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.891 [2024-12-06 17:47:25.752288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.891 [2024-12-06 17:47:25.752885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.891 [2024-12-06 17:47:25.752915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.891 [2024-12-06 17:47:25.752924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.891 [2024-12-06 17:47:25.753092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.891 [2024-12-06 17:47:25.753247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.891 [2024-12-06 17:47:25.753253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.891 [2024-12-06 17:47:25.753258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.891 [2024-12-06 17:47:25.753264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.891 [2024-12-06 17:47:25.764998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.891 [2024-12-06 17:47:25.765562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.891 [2024-12-06 17:47:25.765592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.891 [2024-12-06 17:47:25.765601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.891 [2024-12-06 17:47:25.765780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.891 [2024-12-06 17:47:25.765936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.891 [2024-12-06 17:47:25.765942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.891 [2024-12-06 17:47:25.765948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.891 [2024-12-06 17:47:25.765954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.891 [2024-12-06 17:47:25.777697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.891 [2024-12-06 17:47:25.778298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.891 [2024-12-06 17:47:25.778327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.891 [2024-12-06 17:47:25.778336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.891 [2024-12-06 17:47:25.778504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.891 [2024-12-06 17:47:25.778668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.891 [2024-12-06 17:47:25.778675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.891 [2024-12-06 17:47:25.778680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.891 [2024-12-06 17:47:25.778686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.891 [2024-12-06 17:47:25.790464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.891 [2024-12-06 17:47:25.791065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.891 [2024-12-06 17:47:25.791095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.891 [2024-12-06 17:47:25.791104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.891 [2024-12-06 17:47:25.791272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.891 [2024-12-06 17:47:25.791427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.891 [2024-12-06 17:47:25.791434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.891 [2024-12-06 17:47:25.791439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.891 [2024-12-06 17:47:25.791445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.891 [2024-12-06 17:47:25.803203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.891 [2024-12-06 17:47:25.803750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.891 [2024-12-06 17:47:25.803780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.891 [2024-12-06 17:47:25.803788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.891 [2024-12-06 17:47:25.803959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.891 [2024-12-06 17:47:25.804114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.891 [2024-12-06 17:47:25.804124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.892 [2024-12-06 17:47:25.804130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.892 [2024-12-06 17:47:25.804136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.892 [2024-12-06 17:47:25.815891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.892 [2024-12-06 17:47:25.816375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.892 [2024-12-06 17:47:25.816390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.892 [2024-12-06 17:47:25.816396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.892 [2024-12-06 17:47:25.816549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.892 [2024-12-06 17:47:25.816706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.892 [2024-12-06 17:47:25.816712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.892 [2024-12-06 17:47:25.816717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.892 [2024-12-06 17:47:25.816722] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.892 [2024-12-06 17:47:25.828605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.892 [2024-12-06 17:47:25.829102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.892 [2024-12-06 17:47:25.829115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.892 [2024-12-06 17:47:25.829121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.892 [2024-12-06 17:47:25.829273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.892 [2024-12-06 17:47:25.829424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.892 [2024-12-06 17:47:25.829430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.892 [2024-12-06 17:47:25.829436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.892 [2024-12-06 17:47:25.829441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.892 [2024-12-06 17:47:25.841351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.892 [2024-12-06 17:47:25.841812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.892 [2024-12-06 17:47:25.841826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.892 [2024-12-06 17:47:25.841831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.892 [2024-12-06 17:47:25.841983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.892 [2024-12-06 17:47:25.842135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.892 [2024-12-06 17:47:25.842141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.892 [2024-12-06 17:47:25.842146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.892 [2024-12-06 17:47:25.842153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.892 [2024-12-06 17:47:25.854047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.892 [2024-12-06 17:47:25.854534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.892 [2024-12-06 17:47:25.854547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.892 [2024-12-06 17:47:25.854552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.892 [2024-12-06 17:47:25.854710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.892 [2024-12-06 17:47:25.854863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.892 [2024-12-06 17:47:25.854869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.892 [2024-12-06 17:47:25.854874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.892 [2024-12-06 17:47:25.854879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.892 [2024-12-06 17:47:25.866765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.892 [2024-12-06 17:47:25.867303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.892 [2024-12-06 17:47:25.867333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.892 [2024-12-06 17:47:25.867341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.892 [2024-12-06 17:47:25.867509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.892 [2024-12-06 17:47:25.867675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.892 [2024-12-06 17:47:25.867682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.892 [2024-12-06 17:47:25.867688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.892 [2024-12-06 17:47:25.867694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:33.892 [2024-12-06 17:47:25.879435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:33.892 [2024-12-06 17:47:25.879995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.892 [2024-12-06 17:47:25.880025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:33.892 [2024-12-06 17:47:25.880034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:33.892 [2024-12-06 17:47:25.880202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:33.892 [2024-12-06 17:47:25.880357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:33.892 [2024-12-06 17:47:25.880364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:33.892 [2024-12-06 17:47:25.880369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:33.892 [2024-12-06 17:47:25.880374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:33.892 [2024-12-06 17:47:25.892115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:33.892 [2024-12-06 17:47:25.892683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.892 [2024-12-06 17:47:25.892713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:33.892 [2024-12-06 17:47:25.892722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:33.892 [2024-12-06 17:47:25.892890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:33.892 [2024-12-06 17:47:25.893045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:33.892 [2024-12-06 17:47:25.893052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:33.892 [2024-12-06 17:47:25.893057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:33.892 [2024-12-06 17:47:25.893062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:33.892 [2024-12-06 17:47:25.904800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:33.892 [2024-12-06 17:47:25.905428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.892 [2024-12-06 17:47:25.905458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:33.892 [2024-12-06 17:47:25.905466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:33.892 [2024-12-06 17:47:25.905634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:33.892 [2024-12-06 17:47:25.905798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:33.892 [2024-12-06 17:47:25.905805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:33.892 [2024-12-06 17:47:25.905811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:33.892 [2024-12-06 17:47:25.905816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:33.892 [2024-12-06 17:47:25.917546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:33.892 [2024-12-06 17:47:25.918135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.892 [2024-12-06 17:47:25.918164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:33.892 [2024-12-06 17:47:25.918173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:33.892 [2024-12-06 17:47:25.918341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:33.892 [2024-12-06 17:47:25.918496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:33.892 [2024-12-06 17:47:25.918503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:33.892 [2024-12-06 17:47:25.918508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:33.892 [2024-12-06 17:47:25.918514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:33.892 [2024-12-06 17:47:25.930249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:33.892 [2024-12-06 17:47:25.930853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.892 [2024-12-06 17:47:25.930883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:33.892 [2024-12-06 17:47:25.930892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:33.892 [2024-12-06 17:47:25.931063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:33.892 [2024-12-06 17:47:25.931219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:33.892 [2024-12-06 17:47:25.931225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:33.892 [2024-12-06 17:47:25.931230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:33.892 [2024-12-06 17:47:25.931236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:33.892 [2024-12-06 17:47:25.942978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:33.892 [2024-12-06 17:47:25.943556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:33.892 [2024-12-06 17:47:25.943586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:33.892 [2024-12-06 17:47:25.943595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:33.892 [2024-12-06 17:47:25.943770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:33.892 [2024-12-06 17:47:25.943926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:33.892 [2024-12-06 17:47:25.943932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:33.892 [2024-12-06 17:47:25.943937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:33.892 [2024-12-06 17:47:25.943943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 7389.50 IOPS, 28.87 MiB/s [2024-12-06T16:47:26.235Z] [2024-12-06 17:47:25.956819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:25.957385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:25.957415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:25.957423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:25.957591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:25.957754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:25.957761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:25.957766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:25.957772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:25.969503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:25.970107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:25.970137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:25.970146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:25.970314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:25.970469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:25.970480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:25.970485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:25.970491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:25.982248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:25.982764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:25.982794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:25.982802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:25.982973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:25.983128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:25.983135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:25.983141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:25.983147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:25.994888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:25.995465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:25.995495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:25.995504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:25.995678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:25.995834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:25.995840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:25.995845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:25.995851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.007580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.008152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.008182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.008191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.008359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.008514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.008521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.008526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.008535] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.020273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.020869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.020899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.020908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.021076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.021232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.021238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.021243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.021249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.032984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.033547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.033576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.033585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.033760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.033916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.033923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.033928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.033933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.045693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.046193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.046223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.046231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.046399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.046554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.046561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.046566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.046572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.058457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.058942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.058957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.058963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.059116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.059268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.059274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.059279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.059284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.071153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.071739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.071770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.071778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.071949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.072104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.072111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.072117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.072122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.083864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.084430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.084460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.084469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.084644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.084800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.084807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.084812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.084818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.096545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.097155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.097185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.097194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.097366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.097522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.097528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.097533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.097539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.109272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.109767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.109797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.109806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.109976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.110132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.110138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.110144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.110150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.122028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.122513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.122528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.122534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.122691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.122845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.122850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.122855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.122861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.134729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.135209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.135222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.135227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.135379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.135531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.135540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.135545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.169 [2024-12-06 17:47:26.135549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.169 [2024-12-06 17:47:26.147423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.169 [2024-12-06 17:47:26.147990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.169 [2024-12-06 17:47:26.148021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.169 [2024-12-06 17:47:26.148029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.169 [2024-12-06 17:47:26.148197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.169 [2024-12-06 17:47:26.148353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.169 [2024-12-06 17:47:26.148360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.169 [2024-12-06 17:47:26.148366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.170 [2024-12-06 17:47:26.148371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.170 [2024-12-06 17:47:26.160125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.170 [2024-12-06 17:47:26.160726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.170 [2024-12-06 17:47:26.160756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.170 [2024-12-06 17:47:26.160765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.170 [2024-12-06 17:47:26.160936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.170 [2024-12-06 17:47:26.161092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.170 [2024-12-06 17:47:26.161098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.170 [2024-12-06 17:47:26.161104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.170 [2024-12-06 17:47:26.161109] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.170 [2024-12-06 17:47:26.172859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.170 [2024-12-06 17:47:26.173360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.170 [2024-12-06 17:47:26.173375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.170 [2024-12-06 17:47:26.173381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.170 [2024-12-06 17:47:26.173533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.170 [2024-12-06 17:47:26.173691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.170 [2024-12-06 17:47:26.173697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.170 [2024-12-06 17:47:26.173702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.170 [2024-12-06 17:47:26.173711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.170 [2024-12-06 17:47:26.185577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.170 [2024-12-06 17:47:26.186166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.170 [2024-12-06 17:47:26.186196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.170 [2024-12-06 17:47:26.186205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.170 [2024-12-06 17:47:26.186374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.170 [2024-12-06 17:47:26.186529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.170 [2024-12-06 17:47:26.186535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.170 [2024-12-06 17:47:26.186541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.170 [2024-12-06 17:47:26.186546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.170 [2024-12-06 17:47:26.198287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.170 [2024-12-06 17:47:26.198937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.170 [2024-12-06 17:47:26.198967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.170 [2024-12-06 17:47:26.198976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.170 [2024-12-06 17:47:26.199144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.170 [2024-12-06 17:47:26.199299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.170 [2024-12-06 17:47:26.199306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.170 [2024-12-06 17:47:26.199311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.170 [2024-12-06 17:47:26.199317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.170 [2024-12-06 17:47:26.211054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.170 [2024-12-06 17:47:26.211622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.170 [2024-12-06 17:47:26.211657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.170 [2024-12-06 17:47:26.211666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.170 [2024-12-06 17:47:26.211837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.170 [2024-12-06 17:47:26.211992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.170 [2024-12-06 17:47:26.211999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.170 [2024-12-06 17:47:26.212004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.170 [2024-12-06 17:47:26.212010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.170 [2024-12-06 17:47:26.223741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.170 [2024-12-06 17:47:26.224322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.170 [2024-12-06 17:47:26.224351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.170 [2024-12-06 17:47:26.224360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.170 [2024-12-06 17:47:26.224528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.170 [2024-12-06 17:47:26.224690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.170 [2024-12-06 17:47:26.224697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.170 [2024-12-06 17:47:26.224703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.170 [2024-12-06 17:47:26.224709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.430 [2024-12-06 17:47:26.236470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.430 [2024-12-06 17:47:26.237066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.430 [2024-12-06 17:47:26.237096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.430 [2024-12-06 17:47:26.237105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.430 [2024-12-06 17:47:26.237273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.430 [2024-12-06 17:47:26.237428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.430 [2024-12-06 17:47:26.237434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.430 [2024-12-06 17:47:26.237439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.430 [2024-12-06 17:47:26.237445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.430 [2024-12-06 17:47:26.249185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.430 [2024-12-06 17:47:26.249680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.430 [2024-12-06 17:47:26.249696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.430 [2024-12-06 17:47:26.249702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.430 [2024-12-06 17:47:26.249861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.430 [2024-12-06 17:47:26.250014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.430 [2024-12-06 17:47:26.250020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.430 [2024-12-06 17:47:26.250025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.430 [2024-12-06 17:47:26.250030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.430 [2024-12-06 17:47:26.261927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.430 [2024-12-06 17:47:26.262425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.430 [2024-12-06 17:47:26.262438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.430 [2024-12-06 17:47:26.262444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.430 [2024-12-06 17:47:26.262600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.430 [2024-12-06 17:47:26.262759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.430 [2024-12-06 17:47:26.262766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.262771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.262776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.274647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.275210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.275240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.275248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.275417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.275572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.275579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.275584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.275590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.287320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.287932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.287961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.287970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.288138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.288293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.288300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.288306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.288311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.300047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.300625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.300661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.300670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.300841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.300996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.301006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.301011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.301017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.312751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.313319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.313348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.313357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.313525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.313688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.313695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.313700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.313706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.325436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.326054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.326085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.326093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.326261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.326416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.326423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.326428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.326434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.338167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.338738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.338767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.338776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.338947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.339102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.339108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.339114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.339124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.350869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.351396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.351426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.351435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.351603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.351766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.351773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.351779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.351784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.363510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.364026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.364056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.364065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.364233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.364388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.364395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.364400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.364406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.376298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.376947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.376977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.376986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.377153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.377309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.377315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.377320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.377326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.389055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.389623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.389659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.389668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.389839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.389994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.390001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.390006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.390012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.401744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.402318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.402348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.402357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.402525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.402687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.402695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.402700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.402706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.414434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.415041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.415071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.415080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.415250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.415405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.415412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.415417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.415423] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.427160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.427751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.427781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.427790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.427964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.428119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.428126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.428132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.428137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.439872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.440351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.440366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.440371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.440524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.440682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.440689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.440694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.431 [2024-12-06 17:47:26.440698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.431 [2024-12-06 17:47:26.452580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.431 [2024-12-06 17:47:26.453134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.431 [2024-12-06 17:47:26.453164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.431 [2024-12-06 17:47:26.453173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.431 [2024-12-06 17:47:26.453343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.431 [2024-12-06 17:47:26.453499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.431 [2024-12-06 17:47:26.453506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.431 [2024-12-06 17:47:26.453511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.432 [2024-12-06 17:47:26.453517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.432 [2024-12-06 17:47:26.465277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.432 [2024-12-06 17:47:26.465855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.432 [2024-12-06 17:47:26.465885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.432 [2024-12-06 17:47:26.465893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.432 [2024-12-06 17:47:26.466062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.432 [2024-12-06 17:47:26.466217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.432 [2024-12-06 17:47:26.466227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.432 [2024-12-06 17:47:26.466233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.432 [2024-12-06 17:47:26.466239] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.432 [2024-12-06 17:47:26.477984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.432 [2024-12-06 17:47:26.478560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.432 [2024-12-06 17:47:26.478590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.432 [2024-12-06 17:47:26.478600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.432 [2024-12-06 17:47:26.478776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.432 [2024-12-06 17:47:26.478932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.432 [2024-12-06 17:47:26.478939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.432 [2024-12-06 17:47:26.478944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.432 [2024-12-06 17:47:26.478950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.432 [2024-12-06 17:47:26.490685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.432 [2024-12-06 17:47:26.491245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.432 [2024-12-06 17:47:26.491275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.432 [2024-12-06 17:47:26.491284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.432 [2024-12-06 17:47:26.491452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.432 [2024-12-06 17:47:26.491608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.432 [2024-12-06 17:47:26.491614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.432 [2024-12-06 17:47:26.491619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.432 [2024-12-06 17:47:26.491625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.693 [2024-12-06 17:47:26.503364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.693 [2024-12-06 17:47:26.503938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.693 [2024-12-06 17:47:26.503968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.693 [2024-12-06 17:47:26.503977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.693 [2024-12-06 17:47:26.504145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.693 [2024-12-06 17:47:26.504301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.693 [2024-12-06 17:47:26.504307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.693 [2024-12-06 17:47:26.504313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.693 [2024-12-06 17:47:26.504327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.693 [2024-12-06 17:47:26.516069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.693 [2024-12-06 17:47:26.516548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.693 [2024-12-06 17:47:26.516563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.693 [2024-12-06 17:47:26.516568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.693 [2024-12-06 17:47:26.516725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.693 [2024-12-06 17:47:26.516878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.693 [2024-12-06 17:47:26.516883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.693 [2024-12-06 17:47:26.516889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.693 [2024-12-06 17:47:26.516894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.693 [2024-12-06 17:47:26.528769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.693 [2024-12-06 17:47:26.529221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.693 [2024-12-06 17:47:26.529233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.693 [2024-12-06 17:47:26.529239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.693 [2024-12-06 17:47:26.529390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.693 [2024-12-06 17:47:26.529542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.693 [2024-12-06 17:47:26.529548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.693 [2024-12-06 17:47:26.529552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.693 [2024-12-06 17:47:26.529557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.693 [2024-12-06 17:47:26.541420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.693 [2024-12-06 17:47:26.541951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.693 [2024-12-06 17:47:26.541981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.693 [2024-12-06 17:47:26.541990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.693 [2024-12-06 17:47:26.542158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.694 [2024-12-06 17:47:26.542313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.694 [2024-12-06 17:47:26.542319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.694 [2024-12-06 17:47:26.542324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.694 [2024-12-06 17:47:26.542330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.694 [2024-12-06 17:47:26.554075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.694 [2024-12-06 17:47:26.554559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.694 [2024-12-06 17:47:26.554573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.694 [2024-12-06 17:47:26.554579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.694 [2024-12-06 17:47:26.554737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.694 [2024-12-06 17:47:26.554890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.694 [2024-12-06 17:47:26.554896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.694 [2024-12-06 17:47:26.554901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.694 [2024-12-06 17:47:26.554905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.694 [2024-12-06 17:47:26.566849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.694 [2024-12-06 17:47:26.567418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.694 [2024-12-06 17:47:26.567448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.694 [2024-12-06 17:47:26.567457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.694 [2024-12-06 17:47:26.567624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.694 [2024-12-06 17:47:26.567786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.694 [2024-12-06 17:47:26.567793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.694 [2024-12-06 17:47:26.567799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.694 [2024-12-06 17:47:26.567804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.694 [2024-12-06 17:47:26.579540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.694 [2024-12-06 17:47:26.580102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.694 [2024-12-06 17:47:26.580132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.694 [2024-12-06 17:47:26.580140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.694 [2024-12-06 17:47:26.580308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.694 [2024-12-06 17:47:26.580464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.694 [2024-12-06 17:47:26.580470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.694 [2024-12-06 17:47:26.580476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.694 [2024-12-06 17:47:26.580481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.694 [2024-12-06 17:47:26.592216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.694 [2024-12-06 17:47:26.592775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.694 [2024-12-06 17:47:26.592805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.694 [2024-12-06 17:47:26.592814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.694 [2024-12-06 17:47:26.592988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.694 [2024-12-06 17:47:26.593144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.694 [2024-12-06 17:47:26.593150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.694 [2024-12-06 17:47:26.593156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.694 [2024-12-06 17:47:26.593161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.694 [2024-12-06 17:47:26.604897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.694 [2024-12-06 17:47:26.605461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.694 [2024-12-06 17:47:26.605491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.694 [2024-12-06 17:47:26.605499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.694 [2024-12-06 17:47:26.605676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.694 [2024-12-06 17:47:26.605832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.694 [2024-12-06 17:47:26.605838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.694 [2024-12-06 17:47:26.605844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.694 [2024-12-06 17:47:26.605849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.694 [2024-12-06 17:47:26.617575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.694 [2024-12-06 17:47:26.618126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.694 [2024-12-06 17:47:26.618156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.694 [2024-12-06 17:47:26.618165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.694 [2024-12-06 17:47:26.618335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.694 [2024-12-06 17:47:26.618491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.694 [2024-12-06 17:47:26.618497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.694 [2024-12-06 17:47:26.618503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.694 [2024-12-06 17:47:26.618508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.694 [2024-12-06 17:47:26.630245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.694 [2024-12-06 17:47:26.630721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.694 [2024-12-06 17:47:26.630737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.694 [2024-12-06 17:47:26.630742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.694 [2024-12-06 17:47:26.630895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.694 [2024-12-06 17:47:26.631048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.694 [2024-12-06 17:47:26.631057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.694 [2024-12-06 17:47:26.631062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.694 [2024-12-06 17:47:26.631067] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.694 [2024-12-06 17:47:26.642937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.694 [2024-12-06 17:47:26.643503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.694 [2024-12-06 17:47:26.643533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.694 [2024-12-06 17:47:26.643542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.694 [2024-12-06 17:47:26.643717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.694 [2024-12-06 17:47:26.643873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.694 [2024-12-06 17:47:26.643879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.694 [2024-12-06 17:47:26.643885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.694 [2024-12-06 17:47:26.643891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.694 [2024-12-06 17:47:26.655626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.694 [2024-12-06 17:47:26.656225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.694 [2024-12-06 17:47:26.656254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.694 [2024-12-06 17:47:26.656263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.694 [2024-12-06 17:47:26.656431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.694 [2024-12-06 17:47:26.656587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.694 [2024-12-06 17:47:26.656593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.694 [2024-12-06 17:47:26.656598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.694 [2024-12-06 17:47:26.656604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.694 [2024-12-06 17:47:26.668361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.694 [2024-12-06 17:47:26.668968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.694 [2024-12-06 17:47:26.668997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.694 [2024-12-06 17:47:26.669006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.694 [2024-12-06 17:47:26.669175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.695 [2024-12-06 17:47:26.669331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.695 [2024-12-06 17:47:26.669337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.695 [2024-12-06 17:47:26.669343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.695 [2024-12-06 17:47:26.669352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.695 [2024-12-06 17:47:26.681105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.695 [2024-12-06 17:47:26.681725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.695 [2024-12-06 17:47:26.681755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.695 [2024-12-06 17:47:26.681764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.695 [2024-12-06 17:47:26.681934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.695 [2024-12-06 17:47:26.682090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.695 [2024-12-06 17:47:26.682096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.695 [2024-12-06 17:47:26.682102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.695 [2024-12-06 17:47:26.682108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.695 [2024-12-06 17:47:26.693900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.695 [2024-12-06 17:47:26.694379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.695 [2024-12-06 17:47:26.694393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.695 [2024-12-06 17:47:26.694398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.695 [2024-12-06 17:47:26.694551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.695 [2024-12-06 17:47:26.694710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.695 [2024-12-06 17:47:26.694717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.695 [2024-12-06 17:47:26.694722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.695 [2024-12-06 17:47:26.694727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.695 [2024-12-06 17:47:26.706593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.695 [2024-12-06 17:47:26.707197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.695 [2024-12-06 17:47:26.707227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.695 [2024-12-06 17:47:26.707236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.695 [2024-12-06 17:47:26.707404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.695 [2024-12-06 17:47:26.707559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.695 [2024-12-06 17:47:26.707565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.695 [2024-12-06 17:47:26.707571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.695 [2024-12-06 17:47:26.707576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.695 [2024-12-06 17:47:26.719301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.695 [2024-12-06 17:47:26.719880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.695 [2024-12-06 17:47:26.719909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.695 [2024-12-06 17:47:26.719918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.695 [2024-12-06 17:47:26.720086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.695 [2024-12-06 17:47:26.720241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.695 [2024-12-06 17:47:26.720247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.695 [2024-12-06 17:47:26.720252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.695 [2024-12-06 17:47:26.720258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.695 [2024-12-06 17:47:26.731993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.695 [2024-12-06 17:47:26.732557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.695 [2024-12-06 17:47:26.732587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.695 [2024-12-06 17:47:26.732595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.695 [2024-12-06 17:47:26.732770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.695 [2024-12-06 17:47:26.732926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.695 [2024-12-06 17:47:26.732933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.695 [2024-12-06 17:47:26.732938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.695 [2024-12-06 17:47:26.732944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.695 [2024-12-06 17:47:26.744675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.695 [2024-12-06 17:47:26.745243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.695 [2024-12-06 17:47:26.745273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.695 [2024-12-06 17:47:26.745282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.695 [2024-12-06 17:47:26.745450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.695 [2024-12-06 17:47:26.745605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.695 [2024-12-06 17:47:26.745612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.695 [2024-12-06 17:47:26.745617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.695 [2024-12-06 17:47:26.745622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.957 [2024-12-06 17:47:26.757367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.957 [2024-12-06 17:47:26.757975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.957 [2024-12-06 17:47:26.758005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.957 [2024-12-06 17:47:26.758014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.957 [2024-12-06 17:47:26.758185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.957 [2024-12-06 17:47:26.758341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.957 [2024-12-06 17:47:26.758347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.957 [2024-12-06 17:47:26.758353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.957 [2024-12-06 17:47:26.758358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.957 [2024-12-06 17:47:26.770096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.957 [2024-12-06 17:47:26.770679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.957 [2024-12-06 17:47:26.770709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.957 [2024-12-06 17:47:26.770717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.957 [2024-12-06 17:47:26.770885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.957 [2024-12-06 17:47:26.771041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.957 [2024-12-06 17:47:26.771047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.957 [2024-12-06 17:47:26.771052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.957 [2024-12-06 17:47:26.771058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.957 [2024-12-06 17:47:26.782802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.957 [2024-12-06 17:47:26.783375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.957 [2024-12-06 17:47:26.783405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.957 [2024-12-06 17:47:26.783414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.957 [2024-12-06 17:47:26.783581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.957 [2024-12-06 17:47:26.783744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.957 [2024-12-06 17:47:26.783752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.957 [2024-12-06 17:47:26.783757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.957 [2024-12-06 17:47:26.783763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.957 [2024-12-06 17:47:26.795492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.957 [2024-12-06 17:47:26.796058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.957 [2024-12-06 17:47:26.796087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.957 [2024-12-06 17:47:26.796096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.957 [2024-12-06 17:47:26.796264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.957 [2024-12-06 17:47:26.796419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.957 [2024-12-06 17:47:26.796429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.957 [2024-12-06 17:47:26.796435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.957 [2024-12-06 17:47:26.796440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.957 [2024-12-06 17:47:26.808172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.957 [2024-12-06 17:47:26.808736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.957 [2024-12-06 17:47:26.808766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.957 [2024-12-06 17:47:26.808775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.957 [2024-12-06 17:47:26.808943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.957 [2024-12-06 17:47:26.809098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.957 [2024-12-06 17:47:26.809104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.957 [2024-12-06 17:47:26.809110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.957 [2024-12-06 17:47:26.809115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.957 [2024-12-06 17:47:26.820852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.957 [2024-12-06 17:47:26.821416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.957 [2024-12-06 17:47:26.821445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.957 [2024-12-06 17:47:26.821454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.957 [2024-12-06 17:47:26.821622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.957 [2024-12-06 17:47:26.821784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.957 [2024-12-06 17:47:26.821791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.957 [2024-12-06 17:47:26.821797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.957 [2024-12-06 17:47:26.821803] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.958 [2024-12-06 17:47:26.833529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.958 [2024-12-06 17:47:26.834128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.958 [2024-12-06 17:47:26.834158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.958 [2024-12-06 17:47:26.834167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.958 [2024-12-06 17:47:26.834335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.958 [2024-12-06 17:47:26.834491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.958 [2024-12-06 17:47:26.834498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.958 [2024-12-06 17:47:26.834503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.958 [2024-12-06 17:47:26.834513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.958 [2024-12-06 17:47:26.846248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.958 [2024-12-06 17:47:26.846753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.958 [2024-12-06 17:47:26.846782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.958 [2024-12-06 17:47:26.846791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.958 [2024-12-06 17:47:26.846962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.958 [2024-12-06 17:47:26.847117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.958 [2024-12-06 17:47:26.847123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.958 [2024-12-06 17:47:26.847129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.958 [2024-12-06 17:47:26.847134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.958 [2024-12-06 17:47:26.859023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.958 [2024-12-06 17:47:26.859522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.958 [2024-12-06 17:47:26.859537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.958 [2024-12-06 17:47:26.859543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.958 [2024-12-06 17:47:26.859701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.958 [2024-12-06 17:47:26.859854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.958 [2024-12-06 17:47:26.859860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.958 [2024-12-06 17:47:26.859866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.958 [2024-12-06 17:47:26.859870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.958 [2024-12-06 17:47:26.871730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.958 [2024-12-06 17:47:26.872225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.958 [2024-12-06 17:47:26.872237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.958 [2024-12-06 17:47:26.872243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.958 [2024-12-06 17:47:26.872395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.958 [2024-12-06 17:47:26.872547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.958 [2024-12-06 17:47:26.872552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.958 [2024-12-06 17:47:26.872557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.958 [2024-12-06 17:47:26.872562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.958 [2024-12-06 17:47:26.884464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.958 [2024-12-06 17:47:26.884860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.958 [2024-12-06 17:47:26.884873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.958 [2024-12-06 17:47:26.884879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.958 [2024-12-06 17:47:26.885031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.958 [2024-12-06 17:47:26.885183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.958 [2024-12-06 17:47:26.885189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.958 [2024-12-06 17:47:26.885194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.958 [2024-12-06 17:47:26.885199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.958 [2024-12-06 17:47:26.897210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.958 [2024-12-06 17:47:26.897859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.958 [2024-12-06 17:47:26.897889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.958 [2024-12-06 17:47:26.897897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.958 [2024-12-06 17:47:26.898065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.958 [2024-12-06 17:47:26.898221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.958 [2024-12-06 17:47:26.898227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.958 [2024-12-06 17:47:26.898232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.958 [2024-12-06 17:47:26.898238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.958 [2024-12-06 17:47:26.909973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.958 [2024-12-06 17:47:26.910529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.958 [2024-12-06 17:47:26.910560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.958 [2024-12-06 17:47:26.910569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.958 [2024-12-06 17:47:26.910745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.958 [2024-12-06 17:47:26.910900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.958 [2024-12-06 17:47:26.910907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.958 [2024-12-06 17:47:26.910913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.958 [2024-12-06 17:47:26.910919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.958 [2024-12-06 17:47:26.922646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.958 [2024-12-06 17:47:26.923213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.958 [2024-12-06 17:47:26.923243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.958 [2024-12-06 17:47:26.923253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.958 [2024-12-06 17:47:26.923424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.958 [2024-12-06 17:47:26.923580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.958 [2024-12-06 17:47:26.923587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.958 [2024-12-06 17:47:26.923593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.958 [2024-12-06 17:47:26.923599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.958 [2024-12-06 17:47:26.935330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.958 [2024-12-06 17:47:26.935944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.958 [2024-12-06 17:47:26.935974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.958 [2024-12-06 17:47:26.935982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.958 [2024-12-06 17:47:26.936150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.958 [2024-12-06 17:47:26.936306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.958 [2024-12-06 17:47:26.936312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.958 [2024-12-06 17:47:26.936318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.958 [2024-12-06 17:47:26.936323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:34.958 [2024-12-06 17:47:26.948058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:34.958 [2024-12-06 17:47:26.948521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.958 [2024-12-06 17:47:26.948535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:34.958 [2024-12-06 17:47:26.948541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:34.958 [2024-12-06 17:47:26.948699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:34.958 [2024-12-06 17:47:26.948852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:34.958 [2024-12-06 17:47:26.948858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:34.958 [2024-12-06 17:47:26.948863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:34.958 [2024-12-06 17:47:26.948868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:34.958 5911.60 IOPS, 23.09 MiB/s [2024-12-06T16:47:27.024Z] [2024-12-06 17:47:26.961742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.959 [2024-12-06 17:47:26.962322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.959 [2024-12-06 17:47:26.962352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.959 [2024-12-06 17:47:26.962361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.959 [2024-12-06 17:47:26.962529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.959 [2024-12-06 17:47:26.962690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.959 [2024-12-06 17:47:26.962701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.959 [2024-12-06 17:47:26.962706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.959 [2024-12-06 17:47:26.962712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.959 [2024-12-06 17:47:26.974443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.959 [2024-12-06 17:47:26.975004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.959 [2024-12-06 17:47:26.975033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.959 [2024-12-06 17:47:26.975042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.959 [2024-12-06 17:47:26.975210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.959 [2024-12-06 17:47:26.975365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.959 [2024-12-06 17:47:26.975372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.959 [2024-12-06 17:47:26.975377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.959 [2024-12-06 17:47:26.975382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.959 [2024-12-06 17:47:26.987127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.959 [2024-12-06 17:47:26.987731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.959 [2024-12-06 17:47:26.987761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.959 [2024-12-06 17:47:26.987770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.959 [2024-12-06 17:47:26.987940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.959 [2024-12-06 17:47:26.988096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.959 [2024-12-06 17:47:26.988102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.959 [2024-12-06 17:47:26.988108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.959 [2024-12-06 17:47:26.988114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.959 [2024-12-06 17:47:26.999851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.959 [2024-12-06 17:47:27.000417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.959 [2024-12-06 17:47:27.000446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.959 [2024-12-06 17:47:27.000455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.959 [2024-12-06 17:47:27.000622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.959 [2024-12-06 17:47:27.000785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.959 [2024-12-06 17:47:27.000792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.959 [2024-12-06 17:47:27.000798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.959 [2024-12-06 17:47:27.000807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:34.959 [2024-12-06 17:47:27.012535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:34.959 [2024-12-06 17:47:27.013113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.959 [2024-12-06 17:47:27.013143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:34.959 [2024-12-06 17:47:27.013152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:34.959 [2024-12-06 17:47:27.013319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:34.959 [2024-12-06 17:47:27.013475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:34.959 [2024-12-06 17:47:27.013481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:34.959 [2024-12-06 17:47:27.013486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:34.959 [2024-12-06 17:47:27.013492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.219 [2024-12-06 17:47:27.025233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.219 [2024-12-06 17:47:27.025726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.219 [2024-12-06 17:47:27.025755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.219 [2024-12-06 17:47:27.025764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.219 [2024-12-06 17:47:27.025935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.219 [2024-12-06 17:47:27.026090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.219 [2024-12-06 17:47:27.026096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.220 [2024-12-06 17:47:27.026102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.220 [2024-12-06 17:47:27.026107] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.220 [2024-12-06 17:47:27.037990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.220 [2024-12-06 17:47:27.038476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.220 [2024-12-06 17:47:27.038490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.220 [2024-12-06 17:47:27.038496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.220 [2024-12-06 17:47:27.038656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.220 [2024-12-06 17:47:27.038810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.220 [2024-12-06 17:47:27.038816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.220 [2024-12-06 17:47:27.038821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.220 [2024-12-06 17:47:27.038826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.220 [2024-12-06 17:47:27.050693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.220 [2024-12-06 17:47:27.051151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.220 [2024-12-06 17:47:27.051163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.220 [2024-12-06 17:47:27.051169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.220 [2024-12-06 17:47:27.051321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.220 [2024-12-06 17:47:27.051473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.220 [2024-12-06 17:47:27.051479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.220 [2024-12-06 17:47:27.051483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.220 [2024-12-06 17:47:27.051488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.220 [2024-12-06 17:47:27.063357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.220 [2024-12-06 17:47:27.063996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.220 [2024-12-06 17:47:27.064027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.220 [2024-12-06 17:47:27.064035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.220 [2024-12-06 17:47:27.064203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.220 [2024-12-06 17:47:27.064359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.220 [2024-12-06 17:47:27.064365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.220 [2024-12-06 17:47:27.064371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.220 [2024-12-06 17:47:27.064376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.220 [2024-12-06 17:47:27.076118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.220 [2024-12-06 17:47:27.076581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.220 [2024-12-06 17:47:27.076610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.220 [2024-12-06 17:47:27.076619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.220 [2024-12-06 17:47:27.076797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.220 [2024-12-06 17:47:27.076953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.220 [2024-12-06 17:47:27.076959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.220 [2024-12-06 17:47:27.076965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.220 [2024-12-06 17:47:27.076971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.220 [2024-12-06 17:47:27.088881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.220 [2024-12-06 17:47:27.089451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.220 [2024-12-06 17:47:27.089481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.220 [2024-12-06 17:47:27.089492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.220 [2024-12-06 17:47:27.089669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.220 [2024-12-06 17:47:27.089825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.220 [2024-12-06 17:47:27.089831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.220 [2024-12-06 17:47:27.089837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.220 [2024-12-06 17:47:27.089842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.220 [2024-12-06 17:47:27.101567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.220 [2024-12-06 17:47:27.102166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.220 [2024-12-06 17:47:27.102197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.220 [2024-12-06 17:47:27.102205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.220 [2024-12-06 17:47:27.102373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.220 [2024-12-06 17:47:27.102529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.220 [2024-12-06 17:47:27.102535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.220 [2024-12-06 17:47:27.102541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.220 [2024-12-06 17:47:27.102546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.220 [2024-12-06 17:47:27.114280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.220 [2024-12-06 17:47:27.114783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.220 [2024-12-06 17:47:27.114813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.220 [2024-12-06 17:47:27.114822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.220 [2024-12-06 17:47:27.114992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.220 [2024-12-06 17:47:27.115148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.220 [2024-12-06 17:47:27.115154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.220 [2024-12-06 17:47:27.115160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.220 [2024-12-06 17:47:27.115166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.220 [2024-12-06 17:47:27.127041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.220 [2024-12-06 17:47:27.127612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.220 [2024-12-06 17:47:27.127646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.220 [2024-12-06 17:47:27.127655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.220 [2024-12-06 17:47:27.127822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.220 [2024-12-06 17:47:27.127978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.220 [2024-12-06 17:47:27.127991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.220 [2024-12-06 17:47:27.127997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.220 [2024-12-06 17:47:27.128003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.220 [2024-12-06 17:47:27.139728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.220 [2024-12-06 17:47:27.140292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.220 [2024-12-06 17:47:27.140322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.220 [2024-12-06 17:47:27.140331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.220 [2024-12-06 17:47:27.140499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.220 [2024-12-06 17:47:27.140660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.221 [2024-12-06 17:47:27.140668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.221 [2024-12-06 17:47:27.140673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.221 [2024-12-06 17:47:27.140679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.221 [2024-12-06 17:47:27.152415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.221 [2024-12-06 17:47:27.152978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.221 [2024-12-06 17:47:27.153008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.221 [2024-12-06 17:47:27.153016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.221 [2024-12-06 17:47:27.153185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.221 [2024-12-06 17:47:27.153340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.221 [2024-12-06 17:47:27.153346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.221 [2024-12-06 17:47:27.153351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.221 [2024-12-06 17:47:27.153357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.221 [2024-12-06 17:47:27.165093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.221 [2024-12-06 17:47:27.165575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.221 [2024-12-06 17:47:27.165590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.221 [2024-12-06 17:47:27.165595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.221 [2024-12-06 17:47:27.165754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.221 [2024-12-06 17:47:27.165907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.221 [2024-12-06 17:47:27.165913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.221 [2024-12-06 17:47:27.165918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.221 [2024-12-06 17:47:27.165927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.221 [2024-12-06 17:47:27.177791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.221 [2024-12-06 17:47:27.178353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.221 [2024-12-06 17:47:27.178383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.221 [2024-12-06 17:47:27.178392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.221 [2024-12-06 17:47:27.178562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.221 [2024-12-06 17:47:27.178725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.221 [2024-12-06 17:47:27.178733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.221 [2024-12-06 17:47:27.178738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.221 [2024-12-06 17:47:27.178744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.221 [2024-12-06 17:47:27.190486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.221 [2024-12-06 17:47:27.191091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.221 [2024-12-06 17:47:27.191121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.221 [2024-12-06 17:47:27.191130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.221 [2024-12-06 17:47:27.191298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.221 [2024-12-06 17:47:27.191453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.221 [2024-12-06 17:47:27.191460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.221 [2024-12-06 17:47:27.191465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.221 [2024-12-06 17:47:27.191471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.221 [2024-12-06 17:47:27.203221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.221 [2024-12-06 17:47:27.203779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.221 [2024-12-06 17:47:27.203811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.221 [2024-12-06 17:47:27.203820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.221 [2024-12-06 17:47:27.203992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.221 [2024-12-06 17:47:27.204148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.221 [2024-12-06 17:47:27.204154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.221 [2024-12-06 17:47:27.204160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.221 [2024-12-06 17:47:27.204166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.221 [2024-12-06 17:47:27.215909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.221 [2024-12-06 17:47:27.216481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.221 [2024-12-06 17:47:27.216511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.221 [2024-12-06 17:47:27.216520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.221 [2024-12-06 17:47:27.216694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.221 [2024-12-06 17:47:27.216850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.221 [2024-12-06 17:47:27.216856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.221 [2024-12-06 17:47:27.216861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.221 [2024-12-06 17:47:27.216867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.221 [2024-12-06 17:47:27.228595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.221 [2024-12-06 17:47:27.229197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.221 [2024-12-06 17:47:27.229227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.221 [2024-12-06 17:47:27.229236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.221 [2024-12-06 17:47:27.229403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.221 [2024-12-06 17:47:27.229559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.221 [2024-12-06 17:47:27.229565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.221 [2024-12-06 17:47:27.229571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.221 [2024-12-06 17:47:27.229576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.221 [2024-12-06 17:47:27.241311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.221 [2024-12-06 17:47:27.241838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.221 [2024-12-06 17:47:27.241868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.221 [2024-12-06 17:47:27.241876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.222 [2024-12-06 17:47:27.242044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.222 [2024-12-06 17:47:27.242200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.222 [2024-12-06 17:47:27.242206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.222 [2024-12-06 17:47:27.242212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.222 [2024-12-06 17:47:27.242217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.222 [2024-12-06 17:47:27.253961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.222 [2024-12-06 17:47:27.254494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.222 [2024-12-06 17:47:27.254523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.222 [2024-12-06 17:47:27.254532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.222 [2024-12-06 17:47:27.254710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.222 [2024-12-06 17:47:27.254866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.222 [2024-12-06 17:47:27.254872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.222 [2024-12-06 17:47:27.254878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.222 [2024-12-06 17:47:27.254883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.222 [2024-12-06 17:47:27.266612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.222 [2024-12-06 17:47:27.267143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.222 [2024-12-06 17:47:27.267173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.222 [2024-12-06 17:47:27.267182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.222 [2024-12-06 17:47:27.267350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.222 [2024-12-06 17:47:27.267506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.222 [2024-12-06 17:47:27.267512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.222 [2024-12-06 17:47:27.267517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.222 [2024-12-06 17:47:27.267523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.222 [2024-12-06 17:47:27.279260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.222 [2024-12-06 17:47:27.279829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.222 [2024-12-06 17:47:27.279859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.222 [2024-12-06 17:47:27.279868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.222 [2024-12-06 17:47:27.280036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.222 [2024-12-06 17:47:27.280191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.222 [2024-12-06 17:47:27.280198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.222 [2024-12-06 17:47:27.280203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.222 [2024-12-06 17:47:27.280208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.291979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.292548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.292578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.292587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.292764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.292920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.292931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.292937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.292942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.304666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.305234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.305264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.305273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.305440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.305596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.305602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.305608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.305613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.317347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.317953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.317983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.317992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.318160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.318315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.318321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.318327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.318332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.330068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.330635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.330670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.330678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.330846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.331002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.331008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.331014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.331023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.342757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.343332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.343362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.343371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.343539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.343701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.343709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.343714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.343720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.355469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.356075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.356105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.356114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.356282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.356437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.356443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.356449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.356454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.368198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.368762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.368792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.368801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.368972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.369128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.369134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.369140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.369146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.380890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.381388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.381402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.381408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.381560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.381719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.381725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.381730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.381735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.393597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.394079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.394093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.394098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.394250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.394402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.394408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.394412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.394417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.406301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.406786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.406816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.406824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.406995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.407150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.407156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.407161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.407167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.419043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.419607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.484 [2024-12-06 17:47:27.419643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.484 [2024-12-06 17:47:27.419653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.484 [2024-12-06 17:47:27.419828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.484 [2024-12-06 17:47:27.419983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.484 [2024-12-06 17:47:27.419990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.484 [2024-12-06 17:47:27.419995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.484 [2024-12-06 17:47:27.420001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.484 [2024-12-06 17:47:27.431731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.484 [2024-12-06 17:47:27.432178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.485 [2024-12-06 17:47:27.432208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.485 [2024-12-06 17:47:27.432216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.485 [2024-12-06 17:47:27.432384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.485 [2024-12-06 17:47:27.432539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.485 [2024-12-06 17:47:27.432546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.485 [2024-12-06 17:47:27.432552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.485 [2024-12-06 17:47:27.432557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.485 [2024-12-06 17:47:27.444452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.485 [2024-12-06 17:47:27.444929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.485 [2024-12-06 17:47:27.444945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.485 [2024-12-06 17:47:27.444950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.485 [2024-12-06 17:47:27.445103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.485 [2024-12-06 17:47:27.445256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.485 [2024-12-06 17:47:27.445261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.485 [2024-12-06 17:47:27.445266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.485 [2024-12-06 17:47:27.445271] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.485 [2024-12-06 17:47:27.457149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.485 [2024-12-06 17:47:27.457732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.485 [2024-12-06 17:47:27.457762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.485 [2024-12-06 17:47:27.457771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.485 [2024-12-06 17:47:27.457942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.485 [2024-12-06 17:47:27.458097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.485 [2024-12-06 17:47:27.458107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.485 [2024-12-06 17:47:27.458112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.485 [2024-12-06 17:47:27.458118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.485 [2024-12-06 17:47:27.469849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.485 [2024-12-06 17:47:27.470416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.485 [2024-12-06 17:47:27.470446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.485 [2024-12-06 17:47:27.470455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.485 [2024-12-06 17:47:27.470623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.485 [2024-12-06 17:47:27.470786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.485 [2024-12-06 17:47:27.470793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.485 [2024-12-06 17:47:27.470798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.485 [2024-12-06 17:47:27.470804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.485 [2024-12-06 17:47:27.482540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.485 [2024-12-06 17:47:27.483113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.485 [2024-12-06 17:47:27.483142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.485 [2024-12-06 17:47:27.483151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.485 [2024-12-06 17:47:27.483318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.485 [2024-12-06 17:47:27.483474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.485 [2024-12-06 17:47:27.483480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.485 [2024-12-06 17:47:27.483486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.485 [2024-12-06 17:47:27.483491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.485 [2024-12-06 17:47:27.495229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.485 [2024-12-06 17:47:27.495828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.485 [2024-12-06 17:47:27.495858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.485 [2024-12-06 17:47:27.495867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.485 [2024-12-06 17:47:27.496035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.485 [2024-12-06 17:47:27.496190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.485 [2024-12-06 17:47:27.496196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.485 [2024-12-06 17:47:27.496202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.485 [2024-12-06 17:47:27.496211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.485 [2024-12-06 17:47:27.507972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.485 [2024-12-06 17:47:27.508550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.485 [2024-12-06 17:47:27.508579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.485 [2024-12-06 17:47:27.508588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.485 [2024-12-06 17:47:27.508763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.485 [2024-12-06 17:47:27.508919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.485 [2024-12-06 17:47:27.508926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.485 [2024-12-06 17:47:27.508931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.485 [2024-12-06 17:47:27.508937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.485 [2024-12-06 17:47:27.520672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.485 [2024-12-06 17:47:27.521236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.485 [2024-12-06 17:47:27.521266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.485 [2024-12-06 17:47:27.521276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.485 [2024-12-06 17:47:27.521444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.485 [2024-12-06 17:47:27.521600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.485 [2024-12-06 17:47:27.521606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.485 [2024-12-06 17:47:27.521611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.485 [2024-12-06 17:47:27.521617] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.485 [2024-12-06 17:47:27.533372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.485 [2024-12-06 17:47:27.533977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.485 [2024-12-06 17:47:27.534007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.485 [2024-12-06 17:47:27.534016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.485 [2024-12-06 17:47:27.534184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.485 [2024-12-06 17:47:27.534339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.485 [2024-12-06 17:47:27.534346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.485 [2024-12-06 17:47:27.534351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.485 [2024-12-06 17:47:27.534357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.485 [2024-12-06 17:47:27.546094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.485 [2024-12-06 17:47:27.546633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.485 [2024-12-06 17:47:27.546668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.485 [2024-12-06 17:47:27.546677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.485 [2024-12-06 17:47:27.546845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.485 [2024-12-06 17:47:27.547000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.485 [2024-12-06 17:47:27.547006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.485 [2024-12-06 17:47:27.547012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.485 [2024-12-06 17:47:27.547018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.767 [2024-12-06 17:47:27.558763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.767 [2024-12-06 17:47:27.559330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.767 [2024-12-06 17:47:27.559360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.767 [2024-12-06 17:47:27.559369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.767 [2024-12-06 17:47:27.559538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.767 [2024-12-06 17:47:27.559700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.767 [2024-12-06 17:47:27.559707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.767 [2024-12-06 17:47:27.559713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.767 [2024-12-06 17:47:27.559719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.767 [2024-12-06 17:47:27.571452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.767 [2024-12-06 17:47:27.572036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.767 [2024-12-06 17:47:27.572067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.767 [2024-12-06 17:47:27.572075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.767 [2024-12-06 17:47:27.572243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.767 [2024-12-06 17:47:27.572398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.767 [2024-12-06 17:47:27.572405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.767 [2024-12-06 17:47:27.572410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.767 [2024-12-06 17:47:27.572416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.767 [2024-12-06 17:47:27.584094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:35.767 [2024-12-06 17:47:27.584666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.767 [2024-12-06 17:47:27.584696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420
00:31:35.767 [2024-12-06 17:47:27.584705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set
00:31:35.767 [2024-12-06 17:47:27.584877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor
00:31:35.767 [2024-12-06 17:47:27.585032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:35.767 [2024-12-06 17:47:27.585039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:35.767 [2024-12-06 17:47:27.585044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:35.767 [2024-12-06 17:47:27.585050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:35.767 [2024-12-06 17:47:27.596783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.767 [2024-12-06 17:47:27.597353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.767 [2024-12-06 17:47:27.597383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.767 [2024-12-06 17:47:27.597392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.767 [2024-12-06 17:47:27.597560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.767 [2024-12-06 17:47:27.597720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.767 [2024-12-06 17:47:27.597727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.767 [2024-12-06 17:47:27.597733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.767 [2024-12-06 17:47:27.597738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:35.767 [2024-12-06 17:47:27.609481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.767 [2024-12-06 17:47:27.609942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.767 [2024-12-06 17:47:27.609957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.767 [2024-12-06 17:47:27.609962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.767 [2024-12-06 17:47:27.610115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.767 [2024-12-06 17:47:27.610267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.767 [2024-12-06 17:47:27.610273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.767 [2024-12-06 17:47:27.610278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.767 [2024-12-06 17:47:27.610283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:35.767 [2024-12-06 17:47:27.622157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.767 [2024-12-06 17:47:27.622631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.767 [2024-12-06 17:47:27.622648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.767 [2024-12-06 17:47:27.622654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.767 [2024-12-06 17:47:27.622806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.767 [2024-12-06 17:47:27.622958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.767 [2024-12-06 17:47:27.622967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.768 [2024-12-06 17:47:27.622973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.768 [2024-12-06 17:47:27.622977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:35.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1732322 Killed "${NVMF_APP[@]}" "$@" 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:35.768 [2024-12-06 17:47:27.634859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.768 [2024-12-06 17:47:27.635395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.768 [2024-12-06 17:47:27.635425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.768 [2024-12-06 17:47:27.635434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.768 [2024-12-06 17:47:27.635602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1732463 00:31:35.768 [2024-12-06 17:47:27.635763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.768 [2024-12-06 17:47:27.635771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.768 [2024-12-06 17:47:27.635776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.768 [2024-12-06 17:47:27.635781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1732463 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1732463 ']' 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.768 17:47:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:35.768 [2024-12-06 17:47:27.647553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.768 [2024-12-06 17:47:27.648039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.768 [2024-12-06 17:47:27.648054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.768 [2024-12-06 17:47:27.648059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.768 [2024-12-06 17:47:27.648212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.768 [2024-12-06 17:47:27.648370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.768 [2024-12-06 17:47:27.648377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.768 [2024-12-06 17:47:27.648382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.768 [2024-12-06 17:47:27.648387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:35.768 [2024-12-06 17:47:27.660283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.768 [2024-12-06 17:47:27.660655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.768 [2024-12-06 17:47:27.660669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.768 [2024-12-06 17:47:27.660675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.768 [2024-12-06 17:47:27.660827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.768 [2024-12-06 17:47:27.660979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.768 [2024-12-06 17:47:27.660985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.768 [2024-12-06 17:47:27.660990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.768 [2024-12-06 17:47:27.660995] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:35.768 [2024-12-06 17:47:27.673021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.768 [2024-12-06 17:47:27.673505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.768 [2024-12-06 17:47:27.673518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.768 [2024-12-06 17:47:27.673523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.768 [2024-12-06 17:47:27.673680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.768 [2024-12-06 17:47:27.673832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.768 [2024-12-06 17:47:27.673838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.768 [2024-12-06 17:47:27.673843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.768 [2024-12-06 17:47:27.673848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:35.768 [2024-12-06 17:47:27.685738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.768 [2024-12-06 17:47:27.686193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.768 [2024-12-06 17:47:27.686206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.768 [2024-12-06 17:47:27.686211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.768 [2024-12-06 17:47:27.686363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.768 [2024-12-06 17:47:27.686515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.768 [2024-12-06 17:47:27.686521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.768 [2024-12-06 17:47:27.686529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.768 [2024-12-06 17:47:27.686534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:35.768 [2024-12-06 17:47:27.692119] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:31:35.768 [2024-12-06 17:47:27.692166] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.768 [2024-12-06 17:47:27.698421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.768 [2024-12-06 17:47:27.698995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.768 [2024-12-06 17:47:27.699025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.768 [2024-12-06 17:47:27.699034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.768 [2024-12-06 17:47:27.699202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.768 [2024-12-06 17:47:27.699357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.768 [2024-12-06 17:47:27.699364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.768 [2024-12-06 17:47:27.699369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.768 [2024-12-06 17:47:27.699375] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:35.768 [2024-12-06 17:47:27.711142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.768 [2024-12-06 17:47:27.711719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.768 [2024-12-06 17:47:27.711749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.768 [2024-12-06 17:47:27.711758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.768 [2024-12-06 17:47:27.711929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.768 [2024-12-06 17:47:27.712084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.768 [2024-12-06 17:47:27.712091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.768 [2024-12-06 17:47:27.712097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.768 [2024-12-06 17:47:27.712102] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:35.768 [2024-12-06 17:47:27.723841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.768 [2024-12-06 17:47:27.724438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.768 [2024-12-06 17:47:27.724468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.768 [2024-12-06 17:47:27.724477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.768 [2024-12-06 17:47:27.724652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.768 [2024-12-06 17:47:27.724808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.768 [2024-12-06 17:47:27.724820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.768 [2024-12-06 17:47:27.724826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.769 [2024-12-06 17:47:27.724831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:35.769 [2024-12-06 17:47:27.736485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.769 [2024-12-06 17:47:27.737037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.769 [2024-12-06 17:47:27.737066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.769 [2024-12-06 17:47:27.737075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.769 [2024-12-06 17:47:27.737243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.769 [2024-12-06 17:47:27.737399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.769 [2024-12-06 17:47:27.737405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.769 [2024-12-06 17:47:27.737412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.769 [2024-12-06 17:47:27.737418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:35.769 [2024-12-06 17:47:27.749171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.769 [2024-12-06 17:47:27.749678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.769 [2024-12-06 17:47:27.749700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.769 [2024-12-06 17:47:27.749707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.769 [2024-12-06 17:47:27.749865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.769 [2024-12-06 17:47:27.750019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.769 [2024-12-06 17:47:27.750025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.769 [2024-12-06 17:47:27.750031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.769 [2024-12-06 17:47:27.750036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:35.769 [2024-12-06 17:47:27.761940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.769 [2024-12-06 17:47:27.762504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.769 [2024-12-06 17:47:27.762534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.769 [2024-12-06 17:47:27.762543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.769 [2024-12-06 17:47:27.762718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.769 [2024-12-06 17:47:27.762873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.769 [2024-12-06 17:47:27.762881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.769 [2024-12-06 17:47:27.762886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.769 [2024-12-06 17:47:27.762892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:35.769 [2024-12-06 17:47:27.774665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.769 [2024-12-06 17:47:27.775254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.769 [2024-12-06 17:47:27.775285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.769 [2024-12-06 17:47:27.775294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.769 [2024-12-06 17:47:27.775461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.769 [2024-12-06 17:47:27.775618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.769 [2024-12-06 17:47:27.775625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.769 [2024-12-06 17:47:27.775631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.769 [2024-12-06 17:47:27.775643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:35.769 [2024-12-06 17:47:27.782498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:35.769 [2024-12-06 17:47:27.787387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.769 [2024-12-06 17:47:27.788008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.769 [2024-12-06 17:47:27.788038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.769 [2024-12-06 17:47:27.788046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.769 [2024-12-06 17:47:27.788215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.769 [2024-12-06 17:47:27.788370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.769 [2024-12-06 17:47:27.788377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.769 [2024-12-06 17:47:27.788383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.769 [2024-12-06 17:47:27.788390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:35.769 [2024-12-06 17:47:27.800135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.769 [2024-12-06 17:47:27.800647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.769 [2024-12-06 17:47:27.800663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.769 [2024-12-06 17:47:27.800669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.769 [2024-12-06 17:47:27.800822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.769 [2024-12-06 17:47:27.800974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.769 [2024-12-06 17:47:27.800981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.769 [2024-12-06 17:47:27.800986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.769 [2024-12-06 17:47:27.800991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:35.769 [2024-12-06 17:47:27.811394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.769 [2024-12-06 17:47:27.811414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.769 [2024-12-06 17:47:27.811423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.769 [2024-12-06 17:47:27.811429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.769 [2024-12-06 17:47:27.811433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:35.769 [2024-12-06 17:47:27.812506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.769 [2024-12-06 17:47:27.812676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:35.769 [2024-12-06 17:47:27.812858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.769 [2024-12-06 17:47:27.812867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.769 [2024-12-06 17:47:27.813397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.769 [2024-12-06 17:47:27.813426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.769 [2024-12-06 17:47:27.813435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.769 [2024-12-06 17:47:27.813605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.769 [2024-12-06 17:47:27.813766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.769 [2024-12-06 17:47:27.813773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.769 [2024-12-06 17:47:27.813779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.769 [2024-12-06 17:47:27.813785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:35.769 [2024-12-06 17:47:27.825522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:35.769 [2024-12-06 17:47:27.826040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.769 [2024-12-06 17:47:27.826056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:35.769 [2024-12-06 17:47:27.826062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:35.769 [2024-12-06 17:47:27.826217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:35.769 [2024-12-06 17:47:27.826369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:35.769 [2024-12-06 17:47:27.826375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:35.769 [2024-12-06 17:47:27.826381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:35.769 [2024-12-06 17:47:27.826386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:36.031 [2024-12-06 17:47:27.838270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.031 [2024-12-06 17:47:27.838916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-06 17:47:27.838948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.031 [2024-12-06 17:47:27.838957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.031 [2024-12-06 17:47:27.839128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.031 [2024-12-06 17:47:27.839284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.031 [2024-12-06 17:47:27.839295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.031 [2024-12-06 17:47:27.839301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.031 [2024-12-06 17:47:27.839307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:36.031 [2024-12-06 17:47:27.851047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.031 [2024-12-06 17:47:27.851604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-06 17:47:27.851635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.031 [2024-12-06 17:47:27.851650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.031 [2024-12-06 17:47:27.851819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.031 [2024-12-06 17:47:27.851975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.031 [2024-12-06 17:47:27.851982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.031 [2024-12-06 17:47:27.851987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.031 [2024-12-06 17:47:27.851993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:36.031 [2024-12-06 17:47:27.863738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.031 [2024-12-06 17:47:27.864292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-06 17:47:27.864322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.031 [2024-12-06 17:47:27.864331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.031 [2024-12-06 17:47:27.864500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.031 [2024-12-06 17:47:27.864662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.031 [2024-12-06 17:47:27.864669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.031 [2024-12-06 17:47:27.864675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.031 [2024-12-06 17:47:27.864681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:36.031 [2024-12-06 17:47:27.876408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.031 [2024-12-06 17:47:27.877019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-06 17:47:27.877050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.031 [2024-12-06 17:47:27.877059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.031 [2024-12-06 17:47:27.877228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.031 [2024-12-06 17:47:27.877383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.031 [2024-12-06 17:47:27.877390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.031 [2024-12-06 17:47:27.877395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.031 [2024-12-06 17:47:27.877405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:36.031 [2024-12-06 17:47:27.889155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.031 [2024-12-06 17:47:27.889762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-06 17:47:27.889792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.031 [2024-12-06 17:47:27.889801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.031 [2024-12-06 17:47:27.889970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.031 [2024-12-06 17:47:27.890125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.031 [2024-12-06 17:47:27.890132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.031 [2024-12-06 17:47:27.890137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.031 [2024-12-06 17:47:27.890143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:36.031 [2024-12-06 17:47:27.901884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.031 [2024-12-06 17:47:27.902314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-06 17:47:27.902329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.031 [2024-12-06 17:47:27.902335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.032 [2024-12-06 17:47:27.902488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.032 [2024-12-06 17:47:27.902644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.032 [2024-12-06 17:47:27.902652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.032 [2024-12-06 17:47:27.902657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.032 [2024-12-06 17:47:27.902661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:36.032 [2024-12-06 17:47:27.914531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.032 [2024-12-06 17:47:27.914988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-06 17:47:27.915000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.032 [2024-12-06 17:47:27.915006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.032 [2024-12-06 17:47:27.915158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.032 [2024-12-06 17:47:27.915310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.032 [2024-12-06 17:47:27.915315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.032 [2024-12-06 17:47:27.915320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.032 [2024-12-06 17:47:27.915325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:36.032 [2024-12-06 17:47:27.927238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.032 [2024-12-06 17:47:27.927747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-06 17:47:27.927781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.032 [2024-12-06 17:47:27.927790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.032 [2024-12-06 17:47:27.927961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.032 [2024-12-06 17:47:27.928116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.032 [2024-12-06 17:47:27.928123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.032 [2024-12-06 17:47:27.928128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.032 [2024-12-06 17:47:27.928134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:36.032 [2024-12-06 17:47:27.940006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.032 [2024-12-06 17:47:27.940446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-06 17:47:27.940476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.032 [2024-12-06 17:47:27.940485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.032 [2024-12-06 17:47:27.940661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.032 [2024-12-06 17:47:27.940817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.032 [2024-12-06 17:47:27.940824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.032 [2024-12-06 17:47:27.940829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.032 [2024-12-06 17:47:27.940835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:36.032 [2024-12-06 17:47:27.952716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.032 [2024-12-06 17:47:27.953191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-06 17:47:27.953222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.032 [2024-12-06 17:47:27.953231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.032 [2024-12-06 17:47:27.953402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.032 [2024-12-06 17:47:27.953557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.032 [2024-12-06 17:47:27.953565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.032 [2024-12-06 17:47:27.953570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.032 [2024-12-06 17:47:27.953576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:36.032 4926.33 IOPS, 19.24 MiB/s [2024-12-06T16:47:28.098Z] [2024-12-06 17:47:27.965449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.032 [2024-12-06 17:47:27.965953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-06 17:47:27.965967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.032 [2024-12-06 17:47:27.965973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.032 [2024-12-06 17:47:27.966130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.032 [2024-12-06 17:47:27.966282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.032 [2024-12-06 17:47:27.966288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.032 [2024-12-06 17:47:27.966293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.032 [2024-12-06 17:47:27.966298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:36.032 [2024-12-06 17:47:27.978166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.032 [2024-12-06 17:47:27.978662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-06 17:47:27.978678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.032 [2024-12-06 17:47:27.978683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.032 [2024-12-06 17:47:27.978837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.032 [2024-12-06 17:47:27.978990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.032 [2024-12-06 17:47:27.978996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.032 [2024-12-06 17:47:27.979001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.032 [2024-12-06 17:47:27.979006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:36.032 [2024-12-06 17:47:27.990880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.032 [2024-12-06 17:47:27.991356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-06 17:47:27.991385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.032 [2024-12-06 17:47:27.991394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.032 [2024-12-06 17:47:27.991563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.032 [2024-12-06 17:47:27.991725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.032 [2024-12-06 17:47:27.991732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.032 [2024-12-06 17:47:27.991737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.032 [2024-12-06 17:47:27.991743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:36.032 [2024-12-06 17:47:28.003619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.032 [2024-12-06 17:47:28.004236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-06 17:47:28.004267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.032 [2024-12-06 17:47:28.004275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.032 [2024-12-06 17:47:28.004444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.032 [2024-12-06 17:47:28.004599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.032 [2024-12-06 17:47:28.004609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.032 [2024-12-06 17:47:28.004615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.032 [2024-12-06 17:47:28.004620] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
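While the harness finishes its startup timing above, the host keeps retrying in the background. Illustrative only, and not something that exists in bdevperf.sh: a caller that did not want to lean on the driver's reconnect loop could gate on the port instead before starting I/O.

  # Block until the NVMe/TCP listener accepts connections, then proceed.
  until nc -z 10.0.0.2 4420; do sleep 0.5; done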
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:36.558 [2024-12-06 17:47:28.529916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
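rpc_cmd in this trace is the harness wrapper around SPDK's scripts/rpc.py. As a hedged sketch of the equivalent direct calls (the default /var/tmp/spdk.sock socket path is an assumption here; the arguments are copied verbatim from the trace), these two steps create the TCP transport with an 8192-byte IO unit size and back it with a 64 MB malloc bdev using 512-byte blocks:

  # Direct-call sketch of the traced rpc_cmd lines; socket path assumed.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0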
00:31:36.558 Malloc0 00:31:36.558 [2024-12-06 17:47:28.562874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.558 [2024-12-06 17:47:28.563246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-12-06 17:47:28.563261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.558 [2024-12-06 17:47:28.563267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.558 [2024-12-06 17:47:28.563420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.558 [2024-12-06 17:47:28.563573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.558 [2024-12-06 17:47:28.563579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.558 [2024-12-06 17:47:28.563585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.558 [2024-12-06 17:47:28.563590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:36.558 [2024-12-06 17:47:28.575618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:36.558 [2024-12-06 17:47:28.576130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-12-06 17:47:28.576160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.558 [2024-12-06 17:47:28.576169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.558 [2024-12-06 17:47:28.576337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.558 [2024-12-06 17:47:28.576493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.558 [2024-12-06 17:47:28.576500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.558 [2024-12-06 17:47:28.576505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:31:36.558 [2024-12-06 17:47:28.576511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.558 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:36.558 [2024-12-06 17:47:28.588265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.558 [2024-12-06 17:47:28.588876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-12-06 17:47:28.588906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1026c20 with addr=10.0.0.2, port=4420 00:31:36.558 [2024-12-06 17:47:28.588915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026c20 is same with the state(6) to be set 00:31:36.558 [2024-12-06 17:47:28.589084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026c20 (9): Bad file descriptor 00:31:36.558 [2024-12-06 17:47:28.589239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:36.558 [2024-12-06 17:47:28.589245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:36.559 [2024-12-06 17:47:28.589251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:36.559 [2024-12-06 17:47:28.589257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:36.559 [2024-12-06 17:47:28.594415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.559 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.559 17:47:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1732382 00:31:36.559 [2024-12-06 17:47:28.600995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:36.818 [2024-12-06 17:47:28.623534] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
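The host/bdevperf.sh@17 through @21 rpc_cmd calls interleaved above amount to a five-step target bring-up. Reconstructed in order as plain rpc.py invocations (a sketch equivalent to the rpc_cmd wrapper; the script path relative to the SPDK tree and the default RPC socket are assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only after the last step does the '*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***' notice appear, at which point the pending resets stop being refused, hence the single 'Resetting controller successful' line that closes the setup phase.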
00:31:38.021 4865.00 IOPS, 19.00 MiB/s [2024-12-06T16:47:31.030Z] 5867.38 IOPS, 22.92 MiB/s [2024-12-06T16:47:32.414Z] 6647.44 IOPS, 25.97 MiB/s [2024-12-06T16:47:33.012Z] 7268.10 IOPS, 28.39 MiB/s [2024-12-06T16:47:34.392Z] 7771.09 IOPS, 30.36 MiB/s [2024-12-06T16:47:35.341Z] 8177.17 IOPS, 31.94 MiB/s [2024-12-06T16:47:36.281Z] 8538.08 IOPS, 33.35 MiB/s [2024-12-06T16:47:37.223Z] 8846.50 IOPS, 34.56 MiB/s [2024-12-06T16:47:37.223Z] 9122.13 IOPS, 35.63 MiB/s 00:31:45.157 Latency(us) 00:31:45.157 [2024-12-06T16:47:37.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.157 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:45.157 Verification LBA range: start 0x0 length 0x4000 00:31:45.157 Nvme1n1 : 15.04 9096.22 35.53 13147.44 0.00 5721.21 559.79 48278.19 00:31:45.157 [2024-12-06T16:47:37.223Z] =================================================================================================================== 00:31:45.157 [2024-12-06T16:47:37.223Z] Total : 9096.22 35.53 13147.44 0.00 5721.21 559.79 48278.19 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.157 rmmod nvme_tcp 00:31:45.157 rmmod nvme_fabrics 00:31:45.157 rmmod nvme_keyring 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1732463 ']' 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1732463 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1732463 ']' 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1732463 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.157 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1732463 
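The ramping per-second samples and the summary row above are internally consistent for the 4096-byte I/O size used here: IOPS x 4096 / 2^20 should reproduce the MiB/s column. A one-line spot check (plain awk assumed on the host):

    awk 'BEGIN { printf "%.2f MiB/s\n", 9096.22 * 4096 / 1048576 }'
    # prints 35.53 MiB/s, matching the Nvme1n1 summary row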
00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1732463' 00:31:45.417 killing process with pid 1732463 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1732463 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1732463 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.417 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.418 17:47:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.959 17:47:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.959 00:31:47.959 real 0m28.134s 00:31:47.960 user 1m3.431s 00:31:47.960 sys 0m7.588s 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:47.960 ************************************ 00:31:47.960 END TEST nvmf_bdevperf 00:31:47.960 ************************************ 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.960 ************************************ 00:31:47.960 START TEST nvmf_target_disconnect 00:31:47.960 ************************************ 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:47.960 * Looking for test storage... 
00:31:47.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:47.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.960 --rc genhtml_branch_coverage=1 00:31:47.960 --rc genhtml_function_coverage=1 00:31:47.960 --rc genhtml_legend=1 00:31:47.960 --rc geninfo_all_blocks=1 00:31:47.960 --rc geninfo_unexecuted_blocks=1 00:31:47.960 00:31:47.960 ' 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:47.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.960 --rc genhtml_branch_coverage=1 00:31:47.960 --rc genhtml_function_coverage=1 00:31:47.960 --rc genhtml_legend=1 00:31:47.960 --rc geninfo_all_blocks=1 00:31:47.960 --rc geninfo_unexecuted_blocks=1 00:31:47.960 00:31:47.960 ' 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:47.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.960 --rc genhtml_branch_coverage=1 00:31:47.960 --rc genhtml_function_coverage=1 00:31:47.960 --rc genhtml_legend=1 00:31:47.960 --rc geninfo_all_blocks=1 00:31:47.960 --rc geninfo_unexecuted_blocks=1 00:31:47.960 00:31:47.960 ' 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:47.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.960 --rc genhtml_branch_coverage=1 00:31:47.960 --rc genhtml_function_coverage=1 00:31:47.960 --rc genhtml_legend=1 00:31:47.960 --rc geninfo_all_blocks=1 00:31:47.960 --rc geninfo_unexecuted_blocks=1 00:31:47.960 00:31:47.960 ' 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:47.960 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:47.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.961 17:47:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.099 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:56.100 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:56.100 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:56.100 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:56.100 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.100 17:47:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:31:56.100 00:31:56.100 --- 10.0.0.2 ping statistics --- 00:31:56.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.100 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:56.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:31:56.100 00:31:56.100 --- 10.0.0.1 ping statistics --- 00:31:56.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.100 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:56.100 ************************************ 00:31:56.100 START TEST nvmf_target_disconnect_tc1 00:31:56.100 ************************************ 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:56.100 17:47:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:56.100 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:56.101 [2024-12-06 17:47:47.300025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:56.101 [2024-12-06 17:47:47.300128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14afae0 with addr=10.0.0.2, port=4420 00:31:56.101 [2024-12-06 17:47:47.300165] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:56.101 [2024-12-06 17:47:47.300185] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:56.101 [2024-12-06 17:47:47.300194] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:56.101 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:56.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:56.101 Initializing NVMe Controllers 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:56.101 00:31:56.101 real 0m0.153s 00:31:56.101 user 0m0.060s 00:31:56.101 sys 0m0.093s 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:56.101 ************************************ 00:31:56.101 END TEST nvmf_target_disconnect_tc1 00:31:56.101 ************************************ 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:56.101 ************************************ 00:31:56.101 START TEST nvmf_target_disconnect_tc2 00:31:56.101 ************************************ 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1735066 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1735066 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1735066 ']' 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.101 17:47:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:56.101 [2024-12-06 17:47:47.465328] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:31:56.101 [2024-12-06 17:47:47.465385] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.101 [2024-12-06 17:47:47.564101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:56.101 [2024-12-06 17:47:47.615800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.101 [2024-12-06 17:47:47.615852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:56.101 [2024-12-06 17:47:47.615861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.101 [2024-12-06 17:47:47.615868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.101 [2024-12-06 17:47:47.615874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.101 [2024-12-06 17:47:47.617861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:56.101 [2024-12-06 17:47:47.618025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:56.101 [2024-12-06 17:47:47.618185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:56.101 [2024-12-06 17:47:47.618186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:56.362 Malloc0 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:56.362 [2024-12-06 17:47:48.385491] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:56.362 17:47:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:56.362 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:56.362 [2024-12-06 17:47:48.425890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:56.623 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:56.623 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:56.623 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:56.623 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:56.623 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:56.623 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1735099
00:31:56.623 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:31:56.623 17:47:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:58.547 17:47:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1735066
00:31:58.547 17:47:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 [2024-12-06 17:47:50.460561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Write completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.547 Read completed with error (sct=0, sc=8)
00:31:58.547 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Write completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Write completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Write completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Write completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Write completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Write completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 Read completed with error (sct=0, sc=8)
00:31:58.548 starting I/O failed
00:31:58.548 [2024-12-06 17:47:50.460921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:58.548 [2024-12-06 17:47:50.461190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.461208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.461511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.461521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.462077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.462121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.462339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.462352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.462495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.462505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.462945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.462996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.463342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.463356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.463718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.463730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.464042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.464058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.464253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.464264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.464500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.464510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.464776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.464788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.464926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.464937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.465149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.465163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.465509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.465520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.465877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.465889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.466184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.466196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.466533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.466545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.466865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.466877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.548 qpair failed and we were unable to recover it.
00:31:58.548 [2024-12-06 17:47:50.467183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.548 [2024-12-06 17:47:50.467194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.467520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.467532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.467895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.467906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.468251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.468262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.468585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.468597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.468892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.468903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.469216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.469227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.469401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.469411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.469703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.469715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.470077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.470088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.470324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.470335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.470634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.470655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.470941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.470952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.471295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.471307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.471584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.471596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.471807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.471819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.472097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.472108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.472317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.472327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.472620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.472631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.472928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.472941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.473237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.473248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.473577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.473588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.473895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.473907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.474137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.474148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.474482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.474493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.474807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.474818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.549 qpair failed and we were unable to recover it.
00:31:58.549 [2024-12-06 17:47:50.475100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.549 [2024-12-06 17:47:50.475111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.475444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.475454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.475810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.475820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.476143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.476154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.476524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.476534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.476852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.476862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.477166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.477180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.477524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.477534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.477779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.477789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.478127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.478139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.478417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.478428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.478740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.478751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.479047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.479057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.479267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.479278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.479556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.479566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.479867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.479878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.480185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.480195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.480553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.480564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.480776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.480787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.481122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.481132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.481425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.481438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.481720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.481731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.482073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.482083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.482274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.482285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.482494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.482506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.482908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.482919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.483209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.550 [2024-12-06 17:47:50.483219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.550 qpair failed and we were unable to recover it.
00:31:58.550 [2024-12-06 17:47:50.483515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.483526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.483837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.483848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.484123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.484133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.484410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.484420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.484753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.484764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.485145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.485155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.485443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.485456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.485738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.485750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.486061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.486071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.486241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.486251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.486479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.486489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.486673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.486684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.487004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.487015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.487297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.487307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.487547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.487558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.487868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.487879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.488169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.488179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.488468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.488478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.488772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.488782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.489065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.489075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.489369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.489379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.489671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.489681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.489974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.489984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.490284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.490294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.490572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.490582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.490888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.551 [2024-12-06 17:47:50.490898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.551 qpair failed and we were unable to recover it.
00:31:58.551 [2024-12-06 17:47:50.491188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.491198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.491408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.491418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.491713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.491724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.492023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.492033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.492314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.492327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.492649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.492662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.492977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.492990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.493313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.493329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.493620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.493632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.493959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.493972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.494258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.494270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.494574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.494586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.494878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.494891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.495271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.495284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.495601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.495614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.495926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.495939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.496241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.496254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.496470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.496482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.496651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.496663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.496959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.496972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.497287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.497299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.497671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.497684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.497999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.498011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.498318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.498330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.498603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.498615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.552 [2024-12-06 17:47:50.498917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.552 [2024-12-06 17:47:50.498930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.552 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.499242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.499254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.499508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.499520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.499809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.499822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.500115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.500136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.500431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.500444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.500740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.500752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.501042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.501055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.501339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.501351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.501642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.501655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.501957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.501969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.502265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.502278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.502555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.502567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.502851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.502864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.503174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.503190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.503400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.503416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.503702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.503719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.504047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.504063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.504373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.504389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.504703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.504721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.505030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.505046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.505372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.505389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.505594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.505611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.505917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.505936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.506287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.506304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.506618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.553 [2024-12-06 17:47:50.506636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.553 qpair failed and we were unable to recover it.
00:31:58.553 [2024-12-06 17:47:50.506958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.554 [2024-12-06 17:47:50.506975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.554 qpair failed and we were unable to recover it.
00:31:58.554 [2024-12-06 17:47:50.507280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.554 [2024-12-06 17:47:50.507296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.554 qpair failed and we were unable to recover it.
00:31:58.554 [2024-12-06 17:47:50.507602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.554 [2024-12-06 17:47:50.507618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.554 qpair failed and we were unable to recover it.
00:31:58.554 [2024-12-06 17:47:50.507929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.554 [2024-12-06 17:47:50.507946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.554 qpair failed and we were unable to recover it.
00:31:58.554 [2024-12-06 17:47:50.508291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.554 [2024-12-06 17:47:50.508308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.554 qpair failed and we were unable to recover it.
00:31:58.554 [2024-12-06 17:47:50.508614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.554 [2024-12-06 17:47:50.508630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.554 qpair failed and we were unable to recover it.
00:31:58.554 [2024-12-06 17:47:50.508932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.508949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.509260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.509277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.509590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.509607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.509903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.509920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.510232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.510248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.510552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.510570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.510829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.510848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.511060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.511078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.511383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.511400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.511704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.511721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 
00:31:58.554 [2024-12-06 17:47:50.512049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.512066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.512378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.512398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.512760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.512778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.513091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.513107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.513324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.513342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.513662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.513679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.513981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.513997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.514315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.514331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.514634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.554 [2024-12-06 17:47:50.514665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.554 qpair failed and we were unable to recover it. 00:31:58.554 [2024-12-06 17:47:50.514992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.515009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 
00:31:58.555 [2024-12-06 17:47:50.515319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.515336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.515650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.515667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.515977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.515994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.516306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.516323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.516619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.516635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.516960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.516977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.517300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.517316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.517635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.517658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.517961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.517978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.518284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.518301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 
00:31:58.555 [2024-12-06 17:47:50.518623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.518661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.518993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.519015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.519330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.519353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.519661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.519683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.520041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.520062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.520408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.520429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.520755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.520776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.521142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.521163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.521494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.521516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.521815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.521837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 
00:31:58.555 [2024-12-06 17:47:50.522239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.522259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.522583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.522604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.522970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.522992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.523176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.523197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.523525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.523546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.523876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.523908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.524233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.524255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.524570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.524592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.524948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.524970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.525347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.525368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 
00:31:58.555 [2024-12-06 17:47:50.525705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.525726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.526055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.526076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.526393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.526420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.526751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.526773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.527090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.527112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.527426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.527446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.527755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.527776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.555 [2024-12-06 17:47:50.528102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.555 [2024-12-06 17:47:50.528123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.555 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.528429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.528451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.528844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.528866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 
00:31:58.556 [2024-12-06 17:47:50.529077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.529097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.529377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.529398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.529736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.529758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.530075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.530096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.530427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.530448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.530662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.530683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.531038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.531059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.531437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.531459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.531758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.531780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.532104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.532125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 
00:31:58.556 [2024-12-06 17:47:50.532443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.532464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.532795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.532824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.533112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.533141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.533499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.533528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.533865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.533895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.534250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.534279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.534635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.534675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.535035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.535064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.535416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.535444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.535786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.535815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 
00:31:58.556 [2024-12-06 17:47:50.536145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.536173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.536523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.536551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.536889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.536918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.537277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.537307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.537656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.537686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.537936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.537964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.538340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.538370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.538611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.538661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.538888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.538918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.539272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.539300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 
00:31:58.556 [2024-12-06 17:47:50.539662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.539693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.540045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.540074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.540423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.540451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.540799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.540827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.541063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.541092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.556 [2024-12-06 17:47:50.541442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.556 [2024-12-06 17:47:50.541470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.556 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.541815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.541845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.542195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.542224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.542620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.542665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.543037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.543065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 
00:31:58.557 [2024-12-06 17:47:50.543417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.543445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.543820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.543850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.544206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.544234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.544573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.544603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.544963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.544994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.545339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.545368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.545728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.545758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.546145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.546174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.546511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.546539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.546879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.546909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 
00:31:58.557 [2024-12-06 17:47:50.547236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.547266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.547610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.547648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.547982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.548010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.548365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.548400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.548745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.548774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.549133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.549162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.549506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.549535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.549898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.549928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.550248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.550278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.550616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.550664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 
00:31:58.557 [2024-12-06 17:47:50.550938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.550966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.551294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.551322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.551675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.551705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.552060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.552089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.552438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.552466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.552899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.552928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.553271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.553300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.553690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.553719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.554108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.554137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.554537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.554566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 
00:31:58.557 [2024-12-06 17:47:50.554828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.554875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.555262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.555293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.555657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.555687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.556040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.557 [2024-12-06 17:47:50.556069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.557 qpair failed and we were unable to recover it. 00:31:58.557 [2024-12-06 17:47:50.556436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.556464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.556701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.556731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.557095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.557124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.557529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.557558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.557899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.557930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.558279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.558308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 
00:31:58.558 [2024-12-06 17:47:50.558558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.558593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.558969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.558999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.559347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.559377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.559801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.559831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.560176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.560204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.560558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.560586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.561020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.561049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.561389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.561417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.561784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.561813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 00:31:58.558 [2024-12-06 17:47:50.562173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.558 [2024-12-06 17:47:50.562202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.558 qpair failed and we were unable to recover it. 
00:31:58.558 [2024-12-06 17:47:50.562558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.558 [2024-12-06 17:47:50.562587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.558 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back for every connect attempt from 17:47:50.562 through 17:47:50.641, differing only in timestamps ...]
00:31:58.836 [2024-12-06 17:47:50.641657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.836 [2024-12-06 17:47:50.641686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.836 qpair failed and we were unable to recover it.
00:31:58.836 [2024-12-06 17:47:50.642039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.642068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.642317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.642346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.642672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.642701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.643061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.643090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.643426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.643456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.643732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.643763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.644136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.644164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.644423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.644452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.644799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.644829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.645194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.645224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 
00:31:58.836 [2024-12-06 17:47:50.645551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.645580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.645927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.645958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.646307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.646336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.646694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.836 [2024-12-06 17:47:50.646724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.836 qpair failed and we were unable to recover it. 00:31:58.836 [2024-12-06 17:47:50.647077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.647107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.647486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.647516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.647870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.647900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.648273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.648302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.648657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.648686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.649050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.649079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 
00:31:58.837 [2024-12-06 17:47:50.649437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.649465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.649817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.649847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.650209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.650238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.650601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.650630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.650993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.651027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.651278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.651308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.651660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.651691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.652028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.652057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.652412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.652441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.652798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.652828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 
00:31:58.837 [2024-12-06 17:47:50.653172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.653201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.653558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.653587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.653877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.653906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.654250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.654279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.654659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.654689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.655035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.655064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.655401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.655430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.655792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.655822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.656177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.656207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.656581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.656610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 
00:31:58.837 [2024-12-06 17:47:50.656988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.657018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.657358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.657388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.657746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.657777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.658137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.658165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.658422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.658450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.658719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.658750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.659173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.659202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.659537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.659565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.660004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.660034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.660380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.660408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 
00:31:58.837 [2024-12-06 17:47:50.660763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.660793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.661159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.661188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.837 [2024-12-06 17:47:50.661542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.837 [2024-12-06 17:47:50.661571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.837 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.661926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.661957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.662311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.662339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.662709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.662739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.663129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.663158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.663509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.663538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.663868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.663897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.664252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.664281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 
00:31:58.838 [2024-12-06 17:47:50.664631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.664668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.665015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.665044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.665406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.665435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.665788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.665818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.666175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.666204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.666534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.666566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.666901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.666931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.667289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.667318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.667690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.667719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.668071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.668100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 
00:31:58.838 [2024-12-06 17:47:50.668424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.668454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.668811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.668840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.669215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.669243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.669606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.669635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.669989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.670018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.670360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.670390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.670746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.670776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.671079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.671108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.671441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.671470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.671818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.671848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 
00:31:58.838 [2024-12-06 17:47:50.672190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.672219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.672581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.672611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.672970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.672999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.673233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.673261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.673538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.673568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.673904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.838 [2024-12-06 17:47:50.673934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.838 qpair failed and we were unable to recover it. 00:31:58.838 [2024-12-06 17:47:50.674283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.674313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.674667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.674697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.675049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.675078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.675431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.675460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 
00:31:58.839 [2024-12-06 17:47:50.675814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.675844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.676198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.676227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.676509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.676542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.676800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.676829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.677146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.677175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.677569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.677598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.677929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.677959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.678312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.678341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.678573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.678602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.678982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.679012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 
00:31:58.839 [2024-12-06 17:47:50.679368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.679397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.679757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.679795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.680130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.680159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.680521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.680550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.680922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.680952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.681291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.681320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.681681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.681711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.682088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.682119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.682585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.682614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.682992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.683023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 
00:31:58.839 [2024-12-06 17:47:50.683400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.683429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.683780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.683811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.684160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.684190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.684553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.684582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.684842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.684872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.685220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.685249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.685621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.685658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.686003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.686031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.686369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.686398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.686746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.686783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 
00:31:58.839 [2024-12-06 17:47:50.687028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.687057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.687488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.687517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.687870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.687901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.688262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.688290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.839 qpair failed and we were unable to recover it. 00:31:58.839 [2024-12-06 17:47:50.688652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.839 [2024-12-06 17:47:50.688682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.840 qpair failed and we were unable to recover it. 00:31:58.840 [2024-12-06 17:47:50.688944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.840 [2024-12-06 17:47:50.688973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.840 qpair failed and we were unable to recover it. 00:31:58.840 [2024-12-06 17:47:50.689311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.840 [2024-12-06 17:47:50.689340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.840 qpair failed and we were unable to recover it. 00:31:58.840 [2024-12-06 17:47:50.689699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.840 [2024-12-06 17:47:50.689729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.840 qpair failed and we were unable to recover it. 00:31:58.840 [2024-12-06 17:47:50.690084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.840 [2024-12-06 17:47:50.690113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.840 qpair failed and we were unable to recover it. 00:31:58.840 [2024-12-06 17:47:50.690369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.840 [2024-12-06 17:47:50.690397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.840 qpair failed and we were unable to recover it. 
00:31:58.840 [2024-12-06 17:47:50.690821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.840 [2024-12-06 17:47:50.690850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.840 qpair failed and we were unable to recover it.
00:31:58.845 (the three *ERROR* lines above repeat, differing only in timestamp, for every failed reconnect attempt from 17:47:50.691183 through 17:47:50.768671; each attempt ends with "qpair failed and we were unable to recover it.")
00:31:58.845 [2024-12-06 17:47:50.769058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.769086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 00:31:58.845 [2024-12-06 17:47:50.769467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.769496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 00:31:58.845 [2024-12-06 17:47:50.769887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.769917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 00:31:58.845 [2024-12-06 17:47:50.770271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.770301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 00:31:58.845 [2024-12-06 17:47:50.770558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.770587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 00:31:58.845 [2024-12-06 17:47:50.771012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.771043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 00:31:58.845 [2024-12-06 17:47:50.771337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.771365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 00:31:58.845 [2024-12-06 17:47:50.771746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.771776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 00:31:58.845 [2024-12-06 17:47:50.772119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.772149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 00:31:58.845 [2024-12-06 17:47:50.772385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.772413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 
00:31:58.845 [2024-12-06 17:47:50.772795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.845 [2024-12-06 17:47:50.772826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.845 qpair failed and we were unable to recover it. 00:31:58.845 [2024-12-06 17:47:50.773192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.773222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.773587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.773615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.774063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.774093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.774444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.774474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.774816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.774848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.775213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.775242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.775506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.775534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.775860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.775890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.776241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.776272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 
00:31:58.846 [2024-12-06 17:47:50.776526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.776556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.776900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.776931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.777294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.777324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.777553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.777583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.777955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.777987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.778350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.778381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.778621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.778661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.779089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.779118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.779487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.779516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.779884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.779913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 
00:31:58.846 [2024-12-06 17:47:50.780255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.780284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.780658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.780690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.781075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.781105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.781478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.781506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.781740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.781770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.782135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.782164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.782547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.782576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.782953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.782984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.783255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.783284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.783619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.783661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 
00:31:58.846 [2024-12-06 17:47:50.784003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.784033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.784383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.784412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.784783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.784814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.785165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.785196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.785574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.785604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.785991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.786021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.786387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.786416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.786802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.786833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.786953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.786981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 00:31:58.846 [2024-12-06 17:47:50.787338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.846 [2024-12-06 17:47:50.787367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.846 qpair failed and we were unable to recover it. 
00:31:58.847 [2024-12-06 17:47:50.787751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.787783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.788169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.788198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.788538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.788567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.788936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.788967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.789316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.789346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.789717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.789747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.790098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.790127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.790344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.790373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.790623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.790660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.791032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.791062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 
00:31:58.847 [2024-12-06 17:47:50.791424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.791452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.791818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.791849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.792226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.792255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.792593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.792623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.792869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.792905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.793277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.793306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.793666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.793697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.794044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.794074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.794441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.794470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.794734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.794769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 
00:31:58.847 [2024-12-06 17:47:50.795142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.795172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.795542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.795573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.795921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.795952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.796305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.796336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.796697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.796727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.797084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.797114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.797491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.797520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.797875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.797906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.798270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.798301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.798657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.798687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 
00:31:58.847 [2024-12-06 17:47:50.799090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.799119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.799473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.799502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.799865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.799896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.800248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.847 [2024-12-06 17:47:50.800277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.847 qpair failed and we were unable to recover it. 00:31:58.847 [2024-12-06 17:47:50.800636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.800675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.801033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.801062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.801424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.801453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.801813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.801844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.802207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.802236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.802602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.802631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 
00:31:58.848 [2024-12-06 17:47:50.803014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.803046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.803401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.803437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.803810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.803842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.804214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.804243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.804575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.804605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.805005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.805036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.805334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.805364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.805732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.805763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.806111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.806141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.806487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.806516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 
00:31:58.848 [2024-12-06 17:47:50.806864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.806896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.807268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.807297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.807645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.807675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.808076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.808106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.808471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.808501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.808742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.808772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.809151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.809181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.809542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.809571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.809934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.809964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.810305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.810334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 
00:31:58.848 [2024-12-06 17:47:50.810652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.810682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.811020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.811049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.811422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.811452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.811820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.811849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.812137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.812166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.812519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.812548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.812901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.812931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.813290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.813320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.813684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.813721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.814097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.814126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 
00:31:58.848 [2024-12-06 17:47:50.814480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.814508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.814862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.814894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.848 qpair failed and we were unable to recover it. 00:31:58.848 [2024-12-06 17:47:50.815135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.848 [2024-12-06 17:47:50.815164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.849 qpair failed and we were unable to recover it. 00:31:58.849 [2024-12-06 17:47:50.815505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.849 [2024-12-06 17:47:50.815535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.849 qpair failed and we were unable to recover it. 00:31:58.849 [2024-12-06 17:47:50.815888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.849 [2024-12-06 17:47:50.815919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.849 qpair failed and we were unable to recover it. 00:31:58.849 [2024-12-06 17:47:50.816272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.849 [2024-12-06 17:47:50.816300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.849 qpair failed and we were unable to recover it. 00:31:58.849 [2024-12-06 17:47:50.816668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.849 [2024-12-06 17:47:50.816698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.849 qpair failed and we were unable to recover it. 00:31:58.849 [2024-12-06 17:47:50.817085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.849 [2024-12-06 17:47:50.817114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.849 qpair failed and we were unable to recover it. 00:31:58.849 [2024-12-06 17:47:50.817477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.849 [2024-12-06 17:47:50.817506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.849 qpair failed and we were unable to recover it. 00:31:58.849 [2024-12-06 17:47:50.817862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.849 [2024-12-06 17:47:50.817892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:58.849 qpair failed and we were unable to recover it. 
00:31:58.849 [2024-12-06 17:47:50.818274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.849 [2024-12-06 17:47:50.818303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:58.849 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats over 200 more times between 17:47:50.818 and 17:47:50.898, with only the timestamps varying ...]
00:31:59.127 [2024-12-06 17:47:50.898607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.127 [2024-12-06 17:47:50.898636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.127 qpair failed and we were unable to recover it.
00:31:59.127 [2024-12-06 17:47:50.899000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.899029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.899388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.899417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.899777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.899809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.900176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.900205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.900561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.900590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.900882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.900913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.901291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.901320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.901682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.901712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.902106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.902136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.902489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.902520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 
00:31:59.127 [2024-12-06 17:47:50.902875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.902905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.903267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.903296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.903662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.903693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.904074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.904103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.904550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.904579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.904938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.904969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.905325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.905354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.905695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.905725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.906082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.906114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 00:31:59.127 [2024-12-06 17:47:50.906548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.906578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.127 qpair failed and we were unable to recover it. 
00:31:59.127 [2024-12-06 17:47:50.907046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.127 [2024-12-06 17:47:50.907075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.907437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.907467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.907841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.907874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.908236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.908267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.908682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.908712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.909054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.909084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.909429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.909459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.909696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.909725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.910004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.910033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.910412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.910442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 
00:31:59.128 [2024-12-06 17:47:50.910814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.910844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.911186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.911216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.911437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.911470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.911815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.911846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.912210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.912249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.912591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.912621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.912979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.913023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.913472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.913504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.913864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.913896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.914253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.914282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 
00:31:59.128 [2024-12-06 17:47:50.914654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.914686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.915045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.915076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.915443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.915474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.915818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.915850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.916221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.916249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.916612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.916650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.917021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.917052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.917388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.917416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.917786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.917817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.918159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.918190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 
00:31:59.128 [2024-12-06 17:47:50.918561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.918591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.918957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.918988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.919356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.919386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.919770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.919800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.920239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.920270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.920509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.920539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.920866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.920896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.921265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.921296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.128 [2024-12-06 17:47:50.921674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.128 [2024-12-06 17:47:50.921705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.128 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.922087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.922121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 
00:31:59.129 [2024-12-06 17:47:50.922474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.922503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.922861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.922896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.923229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.923259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.923628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.923675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.924025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.924055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.924408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.924437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.924797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.924828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.925200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.925229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.925585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.925614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.925985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.926014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 
00:31:59.129 [2024-12-06 17:47:50.926432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.926462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.926766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.926795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.927138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.927166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.927527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.927556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.927927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.927956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.928307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.928336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.928703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.928734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.929094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.929122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.929468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.929498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.929867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.929898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 
00:31:59.129 [2024-12-06 17:47:50.930261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.930291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.930546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.930575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.930814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.930843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.931199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.931229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.931589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.931620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.931980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.932009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.932373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.932403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.932742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.932773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.933113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.933143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.933474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.933504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 
00:31:59.129 [2024-12-06 17:47:50.933862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.933899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.934256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.934285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.934663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.934693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.935054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.935083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.935436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.935465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.935826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.935857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.936221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.129 [2024-12-06 17:47:50.936251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.129 qpair failed and we were unable to recover it. 00:31:59.129 [2024-12-06 17:47:50.936508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.936537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.936896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.936926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.937286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.937316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 
00:31:59.130 [2024-12-06 17:47:50.937681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.937711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.938091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.938120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.938481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.938509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.938960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.938990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.939360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.939390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.939763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.939792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.940158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.940188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.940627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.940669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.941015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.941044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.941402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.941431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 
00:31:59.130 [2024-12-06 17:47:50.941802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.941832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.942194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.942223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.942587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.942615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.942989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.943019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.943392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.943421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.943669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.943699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.943938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.943967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.944326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.944354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.944715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.944745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.945144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.945173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 
00:31:59.130 [2024-12-06 17:47:50.945516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.945555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.945906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.945936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.946332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.946361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.946722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.946754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.947102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.947131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.947504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.947533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.947899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.947929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.948295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.948325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.948685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.948716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 00:31:59.130 [2024-12-06 17:47:50.949088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.130 [2024-12-06 17:47:50.949117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.130 qpair failed and we were unable to recover it. 
00:31:59.130 [2024-12-06 17:47:50.949461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.130 [2024-12-06 17:47:50.949490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.130 qpair failed and we were unable to recover it.
00:31:59.130 [... the same three-line connect()/qpair error repeats continuously with only the timestamps changing, from 17:47:50.949461 through 17:47:51.029180; identical entries elided ...]
00:31:59.136 [2024-12-06 17:47:51.029150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.136 [2024-12-06 17:47:51.029180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.136 qpair failed and we were unable to recover it.
00:31:59.136 [2024-12-06 17:47:51.029548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.029577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.029932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.029961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.030323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.030352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.030715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.030746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.031082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.031111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.031432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.031462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.031804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.031835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.032194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.032223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.032584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.032613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.032955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.032985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 
00:31:59.136 [2024-12-06 17:47:51.033356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.033385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.033715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.033745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.034101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.034130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.034481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.136 [2024-12-06 17:47:51.034510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.136 qpair failed and we were unable to recover it. 00:31:59.136 [2024-12-06 17:47:51.034788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.034818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.035155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.035184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.035560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.035590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.035981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.036011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.036378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.036407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.036766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.036797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 
00:31:59.137 [2024-12-06 17:47:51.037161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.037191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.037551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.037580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.037944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.037975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.038324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.038353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.038724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.038755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.039128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.039156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.039524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.039553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.039898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.039928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.040281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.040310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.040674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.040707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 
00:31:59.137 [2024-12-06 17:47:51.040970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.040999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.041362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.041391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.041754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.041784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.042140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.042168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.042417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.042446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.042802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.042838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.043099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.043128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.043475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.043504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.043865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.043897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.044255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.044284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 
00:31:59.137 [2024-12-06 17:47:51.044655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.044685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.044931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.044960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.045343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.045372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.045759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.045790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.046155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.046183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.046542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.046571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.046935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.046965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.047349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.047377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.047733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.047763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.048132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.048162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 
00:31:59.137 [2024-12-06 17:47:51.048516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.048545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.048908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.048937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.137 qpair failed and we were unable to recover it. 00:31:59.137 [2024-12-06 17:47:51.049286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.137 [2024-12-06 17:47:51.049316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.049678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.049709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.050095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.050124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.050489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.050521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.050890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.050920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.051298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.051327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.051701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.051732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.051993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.052022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 
00:31:59.138 [2024-12-06 17:47:51.052375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.052404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.052779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.052810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.053212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.053247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.053585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.053615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.053962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.053992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.054244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.054272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.054526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.054555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.054921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.054951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.055293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.055323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.055671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.055701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 
00:31:59.138 [2024-12-06 17:47:51.056097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.056127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.056487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.056516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.056892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.056922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.057295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.057324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.057682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.057713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.058074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.058102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.058477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.058506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.058872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.058903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.059277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.059307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.059670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.059700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 
00:31:59.138 [2024-12-06 17:47:51.060039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.060069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.060406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.060435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.060795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.060826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.061113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.061142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.061390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.061418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.061767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.061797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.062204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.062232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.062613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.062651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.063019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.063049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.138 [2024-12-06 17:47:51.063392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.063420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 
00:31:59.138 [2024-12-06 17:47:51.063780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.138 [2024-12-06 17:47:51.063812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.138 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.064183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.064211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.064594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.064623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.064988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.065017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.065376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.065404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.065658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.065689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.066056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.066085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.066421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.066451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.066817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.066849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.067219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.067248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 
00:31:59.139 [2024-12-06 17:47:51.067598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.067626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.068002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.068031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.068377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.068408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.068769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.068800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.069161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.069191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.069608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.069645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.069961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.069989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.070224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.070253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.070609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.070646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.071011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.071040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 
00:31:59.139 [2024-12-06 17:47:51.071381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.071411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.071778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.071808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.072171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.072200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.072571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.072599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.072959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.072989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.073341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.073370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.073852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.073883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.074250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.074279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.074665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.074695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.075045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.075074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 
00:31:59.139 [2024-12-06 17:47:51.075425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.075455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.075807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.075836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.076206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.076234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.076604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.076634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.077000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.077029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.077362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.077391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.077745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.077775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.078144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.078173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.078519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.139 [2024-12-06 17:47:51.078548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.139 qpair failed and we were unable to recover it. 00:31:59.139 [2024-12-06 17:47:51.078905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.140 [2024-12-06 17:47:51.078935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.140 qpair failed and we were unable to recover it. 
00:31:59.140 [2024-12-06 17:47:51.079277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.140 [2024-12-06 17:47:51.079312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.140 qpair failed and we were unable to recover it.
[... the same three-record pattern repeats for every subsequent connection attempt from 17:47:51.079 through 17:47:51.160 (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") ...]
00:31:59.145 [2024-12-06 17:47:51.160721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.145 [2024-12-06 17:47:51.160755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-12-06 17:47:51.161154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.161186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 00:31:59.145 [2024-12-06 17:47:51.161431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.161465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 00:31:59.145 [2024-12-06 17:47:51.161831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.161863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 00:31:59.145 [2024-12-06 17:47:51.162226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.162258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 00:31:59.145 [2024-12-06 17:47:51.162621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.162668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 00:31:59.145 [2024-12-06 17:47:51.163014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.163047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 00:31:59.145 [2024-12-06 17:47:51.163397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.163429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 00:31:59.145 [2024-12-06 17:47:51.163794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.163832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 00:31:59.145 [2024-12-06 17:47:51.164257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.164289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 00:31:59.145 [2024-12-06 17:47:51.164625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.164669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 
00:31:59.145 [2024-12-06 17:47:51.164927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.145 [2024-12-06 17:47:51.164961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.145 qpair failed and we were unable to recover it. 00:31:59.145 [2024-12-06 17:47:51.165313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.165344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.165703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.165736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.166093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.166124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.166484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.166515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.166887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.166921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.167276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.167307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.167696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.167729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.168083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.168116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.168460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.168492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 
00:31:59.146 [2024-12-06 17:47:51.168857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.168889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.169247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.169279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.169614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.169654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.169915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.169946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.170291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.170324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.170681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.170714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.171121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.171152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.171502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.171533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.171898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.171931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.172320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.172351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 
00:31:59.146 [2024-12-06 17:47:51.172708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.172741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.173142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.173173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.173547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.173579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.173938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.173971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.174334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.174370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.174731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.174767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.175115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.175147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.175515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.175546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.175912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.175945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.176278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.176311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 
00:31:59.146 [2024-12-06 17:47:51.176669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.176701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.177059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.177091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.177417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.177447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.177798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.177830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.178191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.178223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.178601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.178633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.179020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.179051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.179398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.179430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.179787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.179819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 00:31:59.146 [2024-12-06 17:47:51.180178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.146 [2024-12-06 17:47:51.180211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.146 qpair failed and we were unable to recover it. 
00:31:59.420 [2024-12-06 17:47:51.180609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.420 [2024-12-06 17:47:51.180651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.420 qpair failed and we were unable to recover it. 00:31:59.420 [2024-12-06 17:47:51.181035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.420 [2024-12-06 17:47:51.181068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.420 qpair failed and we were unable to recover it. 00:31:59.420 [2024-12-06 17:47:51.181428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.420 [2024-12-06 17:47:51.181460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.420 qpair failed and we were unable to recover it. 00:31:59.420 [2024-12-06 17:47:51.181812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.420 [2024-12-06 17:47:51.181844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.420 qpair failed and we were unable to recover it. 00:31:59.420 [2024-12-06 17:47:51.182215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.420 [2024-12-06 17:47:51.182246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.420 qpair failed and we were unable to recover it. 00:31:59.420 [2024-12-06 17:47:51.182588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.420 [2024-12-06 17:47:51.182617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.420 qpair failed and we were unable to recover it. 00:31:59.420 [2024-12-06 17:47:51.182974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.420 [2024-12-06 17:47:51.183006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.420 qpair failed and we were unable to recover it. 00:31:59.420 [2024-12-06 17:47:51.183403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.420 [2024-12-06 17:47:51.183436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.420 qpair failed and we were unable to recover it. 00:31:59.420 [2024-12-06 17:47:51.183795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.183827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.184187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.184218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 
00:31:59.421 [2024-12-06 17:47:51.184582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.184614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.184970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.185009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.185371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.185403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.185766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.185800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.186151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.186182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.186612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.186650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.187001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.187033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.187395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.187428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.187781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.187813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.188171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.188202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 
00:31:59.421 [2024-12-06 17:47:51.188601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.188633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.189025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.189057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.189378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.189410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.189770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.189802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.190167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.190200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.190556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.190588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.190943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.190975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.191332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.191364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.191726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.191758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.192119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.192151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 
00:31:59.421 [2024-12-06 17:47:51.192392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.192423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.192774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.192807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.193167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.193198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.193565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.193597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.193997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.194030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.194390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.194420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.194815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.194848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.195200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.195232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.195591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.195623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.196016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.196048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 
00:31:59.421 [2024-12-06 17:47:51.196279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.196309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.196668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.196701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.197051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.197082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.197439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.197471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.197838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.197870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.421 [2024-12-06 17:47:51.198228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.421 [2024-12-06 17:47:51.198260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.421 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.198624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.198663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.199009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.199040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.199393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.199424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.199780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.199814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 
00:31:59.422 [2024-12-06 17:47:51.200172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.200203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.200567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.200599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.201000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.201034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.201392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.201425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.201782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.201815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.202185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.202217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.202578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.202611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.202969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.203001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.203359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.203390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.203745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.203776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 
00:31:59.422 [2024-12-06 17:47:51.204142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.204173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.204522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.204555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.204917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.204949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.205308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.205339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.205699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.205731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.206097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.206128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.206485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.206516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.206880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.206911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.207268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.207299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.207673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.207706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 
00:31:59.422 [2024-12-06 17:47:51.208049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.208080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.208438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.208469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.208809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.208842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.209187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.209219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.209468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.209499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.209874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.209906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.210262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.210294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.210709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.210749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.211156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.211188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 00:31:59.422 [2024-12-06 17:47:51.211517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.422 [2024-12-06 17:47:51.211554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.422 qpair failed and we were unable to recover it. 
00:31:59.422 [2024-12-06 17:47:51.211891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.422 [2024-12-06 17:47:51.211923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.422 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back for every retry timestamped between 17:47:51.211 and 17:47:51.292 ...]
00:31:59.428 [2024-12-06 17:47:51.292416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.428 [2024-12-06 17:47:51.292448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.428 qpair failed and we were unable to recover it.
00:31:59.428 [2024-12-06 17:47:51.292800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.292832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.293188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.293219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.293576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.293608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.293981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.294013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.294357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.294390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.294743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.294776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.295146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.295183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.295534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.295564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.295903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.295935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.296285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.296316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 
00:31:59.428 [2024-12-06 17:47:51.296676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.296708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.297094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.297125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.297484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.297515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.297891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.297924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.298330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.298361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.298713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.298746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.299147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.299178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.299533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.299564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.428 qpair failed and we were unable to recover it. 00:31:59.428 [2024-12-06 17:47:51.299946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.428 [2024-12-06 17:47:51.299979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.300315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.300347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 
00:31:59.429 [2024-12-06 17:47:51.300702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.300734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.301098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.301132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.301558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.301590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.301946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.301978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.302340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.302373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.302733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.302764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.303129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.303161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.303527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.303559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.303920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.303953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.304301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.304333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 
00:31:59.429 [2024-12-06 17:47:51.304688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.304720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.305092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.305123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.305482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.305515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.305878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.305920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.306300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.306333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.306692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.306727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.307101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.307132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.307497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.307529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.307893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.307924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.308278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.308311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 
00:31:59.429 [2024-12-06 17:47:51.308548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.308579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.308941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.308973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.309328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.309360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.309730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.309763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.310126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.310157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.310513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.310544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.310908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.310940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.311293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.311326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.311676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.311710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.312092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.312124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 
00:31:59.429 [2024-12-06 17:47:51.312377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.312407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.312757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.312788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.429 [2024-12-06 17:47:51.313182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.429 [2024-12-06 17:47:51.313218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.429 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.313544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.313577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.313927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.313960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.314197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.314227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.314587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.314618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.315007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.315040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.315407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.315438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.315799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.315834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 
00:31:59.430 [2024-12-06 17:47:51.316228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.316258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.316607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.316662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.317035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.317067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.317432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.317465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.317823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.317856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.318212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.318244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.318673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.318706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.319057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.319090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.319455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.319486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.319816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.319850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 
00:31:59.430 [2024-12-06 17:47:51.320212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.320243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.320682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.320714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.321062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.321093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.321471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.321503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.321867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.321901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.322252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.322284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.322658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.322690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.323051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.323083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.323445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.323477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.323819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.323852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 
00:31:59.430 [2024-12-06 17:47:51.324184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.324216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.324570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.324601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.324996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.325029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.325384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.325416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.325799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.325832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.326193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.326226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.326596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.326628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.327016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.327048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.327412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.327446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.327801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.327834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 
00:31:59.430 [2024-12-06 17:47:51.328185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.430 [2024-12-06 17:47:51.328216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.430 qpair failed and we were unable to recover it. 00:31:59.430 [2024-12-06 17:47:51.328577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.328610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.328971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.329004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.329358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.329389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.329748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.329780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.330141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.330172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.330533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.330564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.330812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.330846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.331197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.331228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.331585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.331616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 
00:31:59.431 [2024-12-06 17:47:51.331977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.332008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.332260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.332299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.332668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.332701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.333049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.333081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.333431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.333464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.333817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.333850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.334206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.334238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.334597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.334629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.335062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.335093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.335449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.335480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 
00:31:59.431 [2024-12-06 17:47:51.335839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.335872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.336300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.336330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.336665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.336699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.337045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.337075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.337314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.337348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.337705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.337738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.338091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.338122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.338486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.338519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.338885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.338918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.339268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.339300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 
00:31:59.431 [2024-12-06 17:47:51.339656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.339688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.340046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.340078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.340426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.340458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.340813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.340845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.341201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.341232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.341596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.341629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.341990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.342022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.342371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.342402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.342769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.342807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.431 qpair failed and we were unable to recover it. 00:31:59.431 [2024-12-06 17:47:51.343191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.431 [2024-12-06 17:47:51.343223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.432 qpair failed and we were unable to recover it. 
00:31:59.432 [2024-12-06 17:47:51.343582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.432 [2024-12-06 17:47:51.343614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.432 qpair failed and we were unable to recover it.
00:31:59.432 [... the same pair of errors (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." repeats continuously from 17:47:51.343982 through 17:47:51.427239; identical entries elided ...]
00:31:59.437 [2024-12-06 17:47:51.428014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.437 [2024-12-06 17:47:51.428047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.437 qpair failed and we were unable to recover it.
00:31:59.437 [2024-12-06 17:47:51.428404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.428435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.428796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.428828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.429185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.429216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.429572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.429605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.429923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.429956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.430329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.430367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.430729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.430764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.431174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.431206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.431568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.431600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.432000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.432032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 
00:31:59.437 [2024-12-06 17:47:51.432392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.432424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.432787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.432819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.433177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.433208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.433580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.433615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.433973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.437 [2024-12-06 17:47:51.434004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.437 qpair failed and we were unable to recover it. 00:31:59.437 [2024-12-06 17:47:51.434369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.434402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.434762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.434797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.435152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.435184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.435544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.435577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.435939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.435972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 
00:31:59.438 [2024-12-06 17:47:51.436341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.436372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.436718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.436749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.437113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.437145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.437511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.437543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.437911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.437944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.438296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.438329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.438693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.438725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.439094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.439127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.439484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.439517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.439900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.439932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 
00:31:59.438 [2024-12-06 17:47:51.440291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.440325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.440680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.440711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.441110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.441146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.441472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.441505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.441863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.441896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.442254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.442285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.442671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.442703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.443067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.443099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.443445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.443477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.443815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.443846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 
00:31:59.438 [2024-12-06 17:47:51.444210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.444244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.444598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.444628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.445025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.445058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.445419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.445452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.445825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.445857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.446220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.446252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.446606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.446649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.447083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.447114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.447478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.447511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.447867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.447900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 
00:31:59.438 [2024-12-06 17:47:51.448259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.448292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.448658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.448690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.449046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.449079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.438 [2024-12-06 17:47:51.449444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.438 [2024-12-06 17:47:51.449474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.438 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.449842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.449875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.450134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.450165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.450536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.450567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.450934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.450968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.451331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.451363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.451734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.451768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 
00:31:59.439 [2024-12-06 17:47:51.452178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.452211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.452556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.452589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.452994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.453027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.453384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.453416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.453778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.453811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.454176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.454208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.454566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.454601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.455027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.455060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.455394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.455427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.455680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.455718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 
00:31:59.439 [2024-12-06 17:47:51.456090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.456122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.456483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.456515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.456880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.456913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.457269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.457302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.457679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.457713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.458057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.458091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.458446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.458477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.458722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.458753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.459183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.459215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.459567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.459599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 
00:31:59.439 [2024-12-06 17:47:51.459963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.459995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.460352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.460386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.460724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.460756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.461000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.461031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.461410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.461439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.461804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.461835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.462192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.462225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.439 [2024-12-06 17:47:51.462578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.439 [2024-12-06 17:47:51.462610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.439 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.462865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.462899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.463277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.463312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 
00:31:59.440 [2024-12-06 17:47:51.463675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.463707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.464104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.464137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.464493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.464525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.464876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.464909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.465278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.465312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.465558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.465593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.466011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.466044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.466279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.466315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.466684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.466720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.467029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.467061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 
00:31:59.440 [2024-12-06 17:47:51.467298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.467337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.467687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.467722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.468114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.468148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.468474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.468508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.468756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.468791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.469053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.469088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.469463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.469496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.469847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.469881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.470233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.470267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.470525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.470557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 
00:31:59.440 [2024-12-06 17:47:51.470889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.470921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.471281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.471309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.471665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.471694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.472067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.472095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.472392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.472421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.472712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.472743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.472940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.472968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.473227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.473255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.440 [2024-12-06 17:47:51.473507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.440 [2024-12-06 17:47:51.473539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.440 qpair failed and we were unable to recover it. 00:31:59.712 [2024-12-06 17:47:51.473865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.473896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 
00:31:59.712 [2024-12-06 17:47:51.474273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.474302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 00:31:59.712 [2024-12-06 17:47:51.474671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.474704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 00:31:59.712 [2024-12-06 17:47:51.475069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.475099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 00:31:59.712 [2024-12-06 17:47:51.475456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.475486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 00:31:59.712 [2024-12-06 17:47:51.475858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.475891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 00:31:59.712 [2024-12-06 17:47:51.476252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.476283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 00:31:59.712 [2024-12-06 17:47:51.476656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.476690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 00:31:59.712 [2024-12-06 17:47:51.477073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.477113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 00:31:59.712 [2024-12-06 17:47:51.477346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.477378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 00:31:59.712 [2024-12-06 17:47:51.477726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.477761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it. 
00:31:59.712 [2024-12-06 17:47:51.478186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.712 [2024-12-06 17:47:51.478220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.712 qpair failed and we were unable to recover it.
00:31:59.715 [... the same three-line error repeats for every subsequent reconnect attempt, with only the timestamps advancing (2024-12-06 17:47:51.478471 through 17:47:51.559323): connect() to 10.0.0.2 port 4420 is refused (errno = 111), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x23af0c0, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:31:59.715 [2024-12-06 17:47:51.559694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.559727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.560091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.560124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.560494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.560525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.560892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.560924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.561283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.561321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.561705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.561737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.561999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.562032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.562378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.562408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.562748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.562785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.563142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.563174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 
00:31:59.715 [2024-12-06 17:47:51.563534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.563566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.563897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.563929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.564164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.564195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.564559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.564592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.564953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.564988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.565344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.565375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.565737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.565771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.566133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.566166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.566540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.566573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.566804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.566839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 
00:31:59.715 [2024-12-06 17:47:51.567210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.567244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.567600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.567632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.567982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.568014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.568369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.568402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.568768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.568801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.569232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.569264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.569617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.569656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.570007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.570041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.570423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.570454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.570713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.570745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 
00:31:59.715 [2024-12-06 17:47:51.571149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.571179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.571526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.571559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.571962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.571995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.572351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.572382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.572752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.572787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.573159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.573189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.573580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.573612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.573977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.574010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.574369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.574400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 00:31:59.715 [2024-12-06 17:47:51.574763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.715 [2024-12-06 17:47:51.574797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.715 qpair failed and we were unable to recover it. 
00:31:59.716 [2024-12-06 17:47:51.575163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.575194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.575533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.575565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.575922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.575955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.576310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.576343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.576705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.576737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.577129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.577163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.577520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.577552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.577906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.577939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.578307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.578338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.578779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.578816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 
00:31:59.716 [2024-12-06 17:47:51.579074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.579106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.579465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.579495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.579865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.579899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.580329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.580361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.580717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.580752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.581122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.581153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.581509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.581540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.581891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.581922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.582285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.582315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.582681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.582715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 
00:31:59.716 [2024-12-06 17:47:51.583075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.583107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.583468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.583501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.583748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.583781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.584137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.584169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.584532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.584565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.584915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.584948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.585320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.585353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.585589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.585625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.585893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.585930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.586285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.586318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 
00:31:59.716 [2024-12-06 17:47:51.586678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.586710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.587096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.587129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.587483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.587522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.587880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.587913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.588264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.588296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.588660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.588693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.589092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.589124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.589477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.589509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.589885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.589918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.590286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.590319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 
00:31:59.716 [2024-12-06 17:47:51.590666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.590699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.591063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.591095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.591457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.591491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.591821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.591854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.592206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.592238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.592590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.592623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.592974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.593006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.593364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.593397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.593759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.593792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.594157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.594188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 
00:31:59.716 [2024-12-06 17:47:51.594543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.594576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.594938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.594971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.595196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.595227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.595626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.595667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.595934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.595965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.596317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.596347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.596704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.596738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.597035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.597067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.597424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.597456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.597809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.597849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 
00:31:59.716 [2024-12-06 17:47:51.598212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.598244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.598497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.598528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.598894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.598926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.599287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.599318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.599698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.599732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.600083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.600114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.600469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.600501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.600862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.600894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.601335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.601367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.601722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.601757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 
00:31:59.716 [2024-12-06 17:47:51.602022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.602054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.602390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.602422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.602778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.716 [2024-12-06 17:47:51.602811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.716 qpair failed and we were unable to recover it. 00:31:59.716 [2024-12-06 17:47:51.603174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.603206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.603566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.603598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.603958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.603990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.604360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.604392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.604771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.604804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.605142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.605175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.605522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.605553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 
00:31:59.717 [2024-12-06 17:47:51.605910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.605941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.606295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.606327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.606726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.606759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.607120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.607151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.607519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.607552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.607904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.607938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.608312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.608356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.608730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.608762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.609161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.609194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 00:31:59.717 [2024-12-06 17:47:51.609539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.717 [2024-12-06 17:47:51.609572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.717 qpair failed and we were unable to recover it. 
00:31:59.717 [2024-12-06 17:47:51.609948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.717 [2024-12-06 17:47:51.609980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.717 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 17:47:51.610 through 17:47:51.690: every connect() attempt fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x23af0c0 at 10.0.0.2, port 4420, and the qpair cannot be recovered ...]
00:31:59.720 [2024-12-06 17:47:51.690787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.720 [2024-12-06 17:47:51.690819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:31:59.720 qpair failed and we were unable to recover it.
00:31:59.720 [2024-12-06 17:47:51.691178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.691209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.691565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.691598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.691961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.691994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.692338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.692377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.692715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.692749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.693103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.693136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.693491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.693523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.693891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.693923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.694272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.694303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.694674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.694708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 
00:31:59.720 [2024-12-06 17:47:51.695068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.695100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.695504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.695536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.695890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.695924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.696276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.696308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.696668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.696700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.697062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.697093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.697457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.697491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.697858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.697891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.698250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.698282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.698667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.698700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 
00:31:59.720 [2024-12-06 17:47:51.699048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.699079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.699441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.699472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.699843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.699877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.700228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.700260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.700617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.700660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.701026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.701057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.701418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.701450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.701811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.701844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.702204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.702236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.702596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.702627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 
00:31:59.720 [2024-12-06 17:47:51.702990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.703024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.703387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.703418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.703781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.703813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.704171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.704203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.704553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.704585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.704986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.705018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.705390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.705424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.705789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.705822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.706158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.706188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.706554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.706586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 
00:31:59.720 [2024-12-06 17:47:51.706951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.706985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.707340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.707372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.707729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.707763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.708135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.708167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.708536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.708575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.708930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.708963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.709325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.709357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.709722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.709756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.710113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.710146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.710505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.710536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 
00:31:59.720 [2024-12-06 17:47:51.710903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.710934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.711288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.711319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.720 [2024-12-06 17:47:51.711678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.720 [2024-12-06 17:47:51.711709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.720 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.712066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.712097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.712502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.712534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.712877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.712909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.713141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.713171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.713543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.713574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.713925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.713958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.714346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.714377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 
00:31:59.721 [2024-12-06 17:47:51.714626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.714682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.715061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.715091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.715450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.715484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.715930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.715965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.716312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.716345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.716706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.716738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.716991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.717021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.717369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.717400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.717776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.717808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.718037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.718067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 
00:31:59.721 [2024-12-06 17:47:51.718340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.718371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.718733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.718772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.719127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.719160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.719414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.719444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.719805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.719838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.720113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.720144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.720495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.720525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.720879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.720911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.721154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.721184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.721536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.721568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 
00:31:59.721 [2024-12-06 17:47:51.721939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.721973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.722330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.722361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.722792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.722825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.723191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.723221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.723585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.723616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.724021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.724054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.724393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.724427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.724788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.724820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.725186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.725217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.725569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.725600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 
00:31:59.721 [2024-12-06 17:47:51.725960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.725994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.726352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.726384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.726763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.726798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.727152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.727183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.727539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.727570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.727927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.727960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.728318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.728350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.728708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.728742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.729113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.729152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.729441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.729472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 
00:31:59.721 [2024-12-06 17:47:51.729830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.729862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.730228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.730260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.730622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.730679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.731013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.731044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.731408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.731439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.731792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.731825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.732067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.732098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.732453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.732484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.732851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.732882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.733250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.733281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 
00:31:59.721 [2024-12-06 17:47:51.733651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.733685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.733928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.733959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.734265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.734296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.734670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.734704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.735062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.735093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.735454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.735485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.735849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.735884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.736242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.736273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.736653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.736688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.737043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.737075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 
00:31:59.721 [2024-12-06 17:47:51.737437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.737469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.737816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.737848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.738211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.721 [2024-12-06 17:47:51.738243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.721 qpair failed and we were unable to recover it. 00:31:59.721 [2024-12-06 17:47:51.738598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.722 [2024-12-06 17:47:51.738630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.722 qpair failed and we were unable to recover it. 00:31:59.722 [2024-12-06 17:47:51.739054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.722 [2024-12-06 17:47:51.739094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.722 qpair failed and we were unable to recover it. 00:31:59.722 [2024-12-06 17:47:51.739326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.722 [2024-12-06 17:47:51.739358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.722 qpair failed and we were unable to recover it. 00:31:59.722 [2024-12-06 17:47:51.739581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.722 [2024-12-06 17:47:51.739611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.722 qpair failed and we were unable to recover it. 00:31:59.722 [2024-12-06 17:47:51.739984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.722 [2024-12-06 17:47:51.740018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.722 qpair failed and we were unable to recover it. 00:31:59.722 [2024-12-06 17:47:51.740373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.722 [2024-12-06 17:47:51.740405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.722 qpair failed and we were unable to recover it. 00:31:59.722 [2024-12-06 17:47:51.740854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.722 [2024-12-06 17:47:51.740888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.722 qpair failed and we were unable to recover it. 
00:31:59.722 [2024-12-06 17:47:51.741240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.722 [2024-12-06 17:47:51.741273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.722 qpair failed and we were unable to recover it.
[... 208 identical records elided: the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x23af0c0 (addr=10.0.0.2, port=4420) repeats 210 times in total between 17:47:51.741240 and 17:47:51.822505; only the first and last occurrences are shown ...]
00:31:59.997 [2024-12-06 17:47:51.822472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.822505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it.
00:31:59.997 [2024-12-06 17:47:51.822826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.822857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.823218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.823250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.823621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.823684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.824055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.824087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.824455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.824487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.824858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.824891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.825247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.825280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.825651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.825683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.826038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.826069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.826426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.826456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 
00:31:59.997 [2024-12-06 17:47:51.826815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.826847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.827200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.827232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.827590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.827622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.828007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.828040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.828407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.828438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.828787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.828822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.829176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.829209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.829563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.829597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.829974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.830008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.830356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.830388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 
00:31:59.997 [2024-12-06 17:47:51.830782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.830815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.831166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.831199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.831591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.831623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.831986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.832018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.832413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.832445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.997 [2024-12-06 17:47:51.832791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.997 [2024-12-06 17:47:51.832824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.997 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.833185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.833219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.833566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.833598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.833959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.833992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.834347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.834378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 
00:31:59.998 [2024-12-06 17:47:51.834758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.834797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.835154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.835187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.835548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.835579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.835957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.835989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.836343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.836376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.836717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.836751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.837097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.837129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.837494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.837525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.837897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.837929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.838297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.838328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 
00:31:59.998 [2024-12-06 17:47:51.838693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.838726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.839104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.839135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.839498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.839531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.839898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.839937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.840295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.840329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.840686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.840720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.841073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.841104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.841459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.841492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.841825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.841858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.842213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.842245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 
00:31:59.998 [2024-12-06 17:47:51.842603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.842634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.843020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.843054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.843411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.843442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.843812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.843846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.844181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.844213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.844567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.844600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.844980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.845012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.845369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.845404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.845780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.845813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.846168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.846201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 
00:31:59.998 [2024-12-06 17:47:51.846566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.846597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.846965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.846999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.847367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.847398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.847753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.847785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.848134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.848166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.848523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.848555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.848921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.848953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.849307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.849340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.849700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.849732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.850095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.850126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 
00:31:59.998 [2024-12-06 17:47:51.850481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.850518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.850766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.850797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.851045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.851076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.851439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.851473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.851824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.851857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.852224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.852256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.852610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.852651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.853013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.853045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.853401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.853432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.853789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.853821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 
00:31:59.998 [2024-12-06 17:47:51.854176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.854207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.854574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.854606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.855001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.855035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.855395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.855427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.855690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.855723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.856140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.856171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.856527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.856559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.856922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.856955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.857308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.857341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.857687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.857719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 
00:31:59.998 [2024-12-06 17:47:51.858071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.858103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.858460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.858492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.858871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.858904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.859263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.859295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.859701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.859734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.860083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.860116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.860473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.860504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.860877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.860909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.861271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.861302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.861541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.861571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 
00:31:59.998 [2024-12-06 17:47:51.861950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.861982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.998 qpair failed and we were unable to recover it. 00:31:59.998 [2024-12-06 17:47:51.862346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.998 [2024-12-06 17:47:51.862379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.862737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.862770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.863123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.863156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.863513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.863544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.863897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.863932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.864292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.864322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.864676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.864707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.865064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.865094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.865446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.865478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 
00:31:59.999 [2024-12-06 17:47:51.865867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.865900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.866299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.866331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.866684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.866717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.867076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.867110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.867468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.867499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.867866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.867899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.868253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.868284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.868651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.868682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.869035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.869066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.869424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.869457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 
00:31:59.999 [2024-12-06 17:47:51.869816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.869849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.870205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.870236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.870600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.870633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.871028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.871060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.871412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.871445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.871792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.871826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.872191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.872222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.872579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.872611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.873007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.873040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 00:31:59.999 [2024-12-06 17:47:51.873407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.873439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it. 
00:31:59.999 [2024-12-06 17:47:51.873802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:59.999 [2024-12-06 17:47:51.873834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:31:59.999 qpair failed and we were unable to recover it.
[... the same three-line failure pattern — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 17:47:51.873 through 17:47:51.955, always for the same tqpair 0x23af0c0 and target 10.0.0.2:4420 ...]
00:32:00.002 [2024-12-06 17:47:51.955073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.955110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it.
00:32:00.002 [2024-12-06 17:47:51.955464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.955497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.955835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.955869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.956250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.956283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.956662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.956696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.957049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.957081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.957307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.957340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.957715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.957748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.958089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.958120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.958492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.958522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.958875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.958907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 
00:32:00.002 [2024-12-06 17:47:51.959274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.959306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.959664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.959697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.960097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.960127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.960455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.960486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.960848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.960881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.961238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.961276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.961624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.961669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.961999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.962029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.962389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.962421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.962775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.962808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 
00:32:00.002 [2024-12-06 17:47:51.963173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.963204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.963449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.963479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.963731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.963765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.964189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.964221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.964623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.964663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.965001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.965034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.965411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.965442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.965683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.965717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.966110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.966142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.966407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.966438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 
00:32:00.002 [2024-12-06 17:47:51.966864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.966897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.967258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.967290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.967540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.967573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.967810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.967842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.968194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.968225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.968599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.968630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.968925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.968956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.969304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.969337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.969678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.969710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.970128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.970159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 
00:32:00.002 [2024-12-06 17:47:51.970507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.970539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.970893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.970925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.971278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.971318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.971673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.971705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.971970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.972003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.972364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.972396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.972756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.972789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.973150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.973182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.973530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.973562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.973907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.973940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 
00:32:00.002 [2024-12-06 17:47:51.974295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.974327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.974676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.974710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.975071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.975103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.975471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.975501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.975871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.975902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.976258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.976290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.976669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.976703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.977080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.977113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.977547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.977579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.977981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.978014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 
00:32:00.002 [2024-12-06 17:47:51.978237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.978267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.978609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.978655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.979036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.979069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.979427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.979460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.979848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.979882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.980240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.980270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.980630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.980671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.980913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.980944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.981351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.981383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.981614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.981666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 
00:32:00.002 [2024-12-06 17:47:51.981944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.981975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.982336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.982369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.982768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.982802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.983166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.983198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.983563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.983596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.983865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.983901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.984247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.984280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.984633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.984694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.984966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.984997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.985353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.985383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 
00:32:00.002 [2024-12-06 17:47:51.985758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.002 [2024-12-06 17:47:51.985791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.002 qpair failed and we were unable to recover it. 00:32:00.002 [2024-12-06 17:47:51.986149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.986180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.986542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.986573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.986930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.986963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.987313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.987345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.987705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.987739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.988075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.988106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.988480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.988511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.988753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.988785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.989179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.989209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 
00:32:00.003 [2024-12-06 17:47:51.989577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.989608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.989977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.990009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.990358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.990388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.990741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.990774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.991138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.991169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.991537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.991570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.991944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.991977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.992378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.992409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.992745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.992778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.993154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.993187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 
00:32:00.003 [2024-12-06 17:47:51.993418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.993450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.993807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.993839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.994215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.994247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.994606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.994649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.994968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.995001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.995360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.995390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.995762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.995796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.996064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.996096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.996443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.996476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.996815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.996847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 
00:32:00.003 [2024-12-06 17:47:51.997227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.997259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.997626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.997665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.997963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.997994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.998347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.998380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.998757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.998788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.999167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.999199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.999564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.999597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:51.999956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:51.999988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.000347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.000379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.000744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.000779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 
00:32:00.003 [2024-12-06 17:47:52.001113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.001143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.001517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.001548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.001908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.001942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.002287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.002318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.002678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.002713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.003098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.003128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.003491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.003523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.003905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.003937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.004304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.004337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 00:32:00.003 [2024-12-06 17:47:52.004751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.004784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it. 
00:32:00.003 [2024-12-06 17:47:52.005127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.003 [2024-12-06 17:47:52.005159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.003 qpair failed and we were unable to recover it.
[... the same three-message sequence — connect() failed, errno = 111 (ECONNREFUSED) / sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 17:47:52.005127 through 17:47:52.085827 ...]
00:32:00.281 [2024-12-06 17:47:52.085792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.085827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it.
00:32:00.281 [2024-12-06 17:47:52.086187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.086219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.086554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.086585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.086977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.087009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.087365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.087399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.087753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.087786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.088134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.088167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.088419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.088450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.088842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.088875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.089107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.089139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.089514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.089545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 
00:32:00.281 [2024-12-06 17:47:52.089906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.089939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.090297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.090328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.090673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.090706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.091081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.091112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.091353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.091394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.091752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.091785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.092129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.092161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.092519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.092550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.092914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.092948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.093304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.093336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 
00:32:00.281 [2024-12-06 17:47:52.093719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.093754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.093982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.094015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.094255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.094286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.094647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.094680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.095026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.095057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.095414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.095447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.095803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.095836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.096194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.096227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.096593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.096624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.281 [2024-12-06 17:47:52.096991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.097026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 
00:32:00.281 [2024-12-06 17:47:52.097384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.281 [2024-12-06 17:47:52.097416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.281 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.097784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.097818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.098177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.098208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.098445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.098476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.098840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.098872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.099230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.099263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.099622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.099663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.100015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.100046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.100410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.100442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.100668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.100703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 
00:32:00.282 [2024-12-06 17:47:52.101054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.101087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.101520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.101558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.101909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.101943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.102369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.102401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.102632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.102677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.103037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.103069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.103426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.103458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.103816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.103850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.104195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.104228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.104658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.104690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 
00:32:00.282 [2024-12-06 17:47:52.105044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.105077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.105431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.105462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.105842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.105876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.106232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.106264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.106611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.106650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.106914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.106948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.107305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.107338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.107693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.107725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.108085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.108116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.108351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.108384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 
00:32:00.282 [2024-12-06 17:47:52.108745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.108777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.109222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.109253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.109611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.109656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.110040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.110071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.110428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.110460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.110818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.110851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.111212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.111244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.111600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.111632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.282 [2024-12-06 17:47:52.112002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.282 [2024-12-06 17:47:52.112034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.282 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.112383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.112416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 
00:32:00.283 [2024-12-06 17:47:52.112679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.112714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.113055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.113087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.113426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.113458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.113812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.113845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.114187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.114219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.114580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.114614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.115004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.115036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.115397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.115429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.115787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.115820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.116175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.116208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 
00:32:00.283 [2024-12-06 17:47:52.116576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.116607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.116844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.116876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.117242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.117273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.117635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.117675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.118036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.118067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.118414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.118445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.118823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.118855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.119425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.119463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.119823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.119864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.120216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.120247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 
00:32:00.283 [2024-12-06 17:47:52.120612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.120658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.121036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.121068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.121420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.121452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.121818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.121850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.122275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.122307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.122664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.122697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.123109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.123141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.123495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.123528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.123890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.123922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.124302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.124334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 
00:32:00.283 [2024-12-06 17:47:52.124688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.283 [2024-12-06 17:47:52.124722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.283 qpair failed and we were unable to recover it. 00:32:00.283 [2024-12-06 17:47:52.125092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.125122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.125483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.125516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.125879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.125911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.126273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.126305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.126668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.126700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.127055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.127086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.127439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.127471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.127828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.127861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.128219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.128257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 
00:32:00.284 [2024-12-06 17:47:52.128606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.128662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.129041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.129073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.129433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.129465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.129815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.129848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.130102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.130132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.130483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.130513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.130875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.130907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.131262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.131294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.131657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.131690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.132043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.132073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 
00:32:00.284 [2024-12-06 17:47:52.132432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.132465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.132834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.132866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.133220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.133250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.133661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.133694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.134056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.134089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.134452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.134483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.134846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.134883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.135233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.135264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.135616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.135663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 00:32:00.284 [2024-12-06 17:47:52.136041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.284 [2024-12-06 17:47:52.136072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.284 qpair failed and we were unable to recover it. 
00:32:00.284 [2024-12-06 17:47:52.136437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.284 [2024-12-06 17:47:52.136470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.284 qpair failed and we were unable to recover it.
00:32:00.290 [the same three-line error repeats for every reconnect attempt from 17:47:52.136 through 17:47:52.217: each connect() to 10.0.0.2, port=4420 fails with errno = 111, and tqpair=0x23af0c0 cannot be recovered]
00:32:00.290 [2024-12-06 17:47:52.218127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.218159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.218527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.218559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.218942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.218975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.219194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.219225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.219599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.219632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.219979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.220011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.220371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.220402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.220767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.220799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.221163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.221196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.221431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.221461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 
00:32:00.290 [2024-12-06 17:47:52.221852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.221884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.222241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.222272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.222634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.222675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.223037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.223074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.223441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.223474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.223817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.223850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.224098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.224129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.224494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.224524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.224898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.224930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.225312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.225344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 
00:32:00.290 [2024-12-06 17:47:52.225717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.225749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.226121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.226153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.226509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.290 [2024-12-06 17:47:52.226540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.290 qpair failed and we were unable to recover it. 00:32:00.290 [2024-12-06 17:47:52.226901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.226934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.227295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.227327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.227701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.227732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.228095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.228125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.228490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.228521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.228896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.228930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.229149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.229181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 
00:32:00.291 [2024-12-06 17:47:52.229559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.229589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.229988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.230020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.230379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.230411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.230768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.230800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.231174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.231206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.231454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.231485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.231825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.231858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.232218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.232250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.232612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.232657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.233014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.233045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 
00:32:00.291 [2024-12-06 17:47:52.233415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.233453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.233863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.233896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.234252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.234285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.234695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.234728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.235096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.235129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.235490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.235522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.235896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.235928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.236282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.236315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.236539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.236571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.236952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.236986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 
00:32:00.291 [2024-12-06 17:47:52.237216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.237248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.237611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.237656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.237937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.237968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.238347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.238379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.238768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.238801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.239165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.239197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.239557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.239590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.239953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.239987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.240345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.240377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.240731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.240762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 
00:32:00.291 [2024-12-06 17:47:52.241136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.241168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.291 qpair failed and we were unable to recover it. 00:32:00.291 [2024-12-06 17:47:52.241526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.291 [2024-12-06 17:47:52.241556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.241778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.241810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.242163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.242195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.242531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.242562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.242927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.242960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.243339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.243371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.243722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.243753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.244107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.244140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.244490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.244521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 
00:32:00.292 [2024-12-06 17:47:52.244771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.244802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.245107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.245138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.245492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.245523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.245863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.245896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.246273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.246305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.246662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.246695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.247060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.247092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.247463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.247494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.247965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.247998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.248345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.248377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 
00:32:00.292 [2024-12-06 17:47:52.248725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.248757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.249117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.249149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.249507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.249538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.249802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.249836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.250074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.250106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.250537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.250569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.250952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.250991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.251379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.251412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.251751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.251783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.252150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.252182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 
00:32:00.292 [2024-12-06 17:47:52.252580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.252613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.252987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.253020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.253384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.253417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.253623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.253664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.292 [2024-12-06 17:47:52.254025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.292 [2024-12-06 17:47:52.254056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.292 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.254422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.254455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.254782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.254815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.255171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.255203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.255450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.255481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.255866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.255897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 
00:32:00.293 [2024-12-06 17:47:52.256264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.256297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.256636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.256679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.257070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.257101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.257454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.257487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.257871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.257903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.258131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.258161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.258520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.258551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.258941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.258976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.259343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.259381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.259772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.259805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 
00:32:00.293 [2024-12-06 17:47:52.260160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.260192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.260407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.260439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.260784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.260816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.261189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.261221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.261457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.261487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.261868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.261899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.262143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.262173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.262522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.262552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.262902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.262934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.263306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.263336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 
00:32:00.293 [2024-12-06 17:47:52.263702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.263735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.264106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.264137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.264381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.264412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.264764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.264797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.265063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.265094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.265337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.265367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.265773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.265805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.266158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.266190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.266554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.266585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.266951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.266984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 
00:32:00.293 [2024-12-06 17:47:52.267337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.267368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.267617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.267672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.293 [2024-12-06 17:47:52.268060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.293 [2024-12-06 17:47:52.268092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.293 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.268340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.268371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.268736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.268768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.269139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.269178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.269536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.269567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.269974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.270007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.270332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.270365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.270714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.270746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 
00:32:00.294 [2024-12-06 17:47:52.271057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.271087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.271466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.271496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.271910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.271943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.272295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.272324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.272691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.272723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.273082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.273115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.273474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.273504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.273874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.273907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.274139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.274170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 00:32:00.294 [2024-12-06 17:47:52.274538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.294 [2024-12-06 17:47:52.274570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.294 qpair failed and we were unable to recover it. 
00:32:00.294 [2024-12-06 17:47:52.274977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.275009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.275360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.275392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.275615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.275670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.275882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.275912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.276274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.276305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.276663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.276696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.277056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.277086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.277452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.277482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.277851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.277883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.278241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.278273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.278628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.278672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.279029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.279061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.279422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.279455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.279801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.279833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.280063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.280093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.280455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.280486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.280855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.280889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.281233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.281264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.281619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.281660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.282037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.282068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.294 [2024-12-06 17:47:52.282470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.294 [2024-12-06 17:47:52.282502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.294 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.282835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.282866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.283232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.283263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.283625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.283668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.284038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.284069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.284426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.284458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.284807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.284839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.285209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.285240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.285602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.285634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.286005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.286036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.286411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.286442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.286817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.286852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.287203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.287233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.287597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.287629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.288026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.288058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.288418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.288448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.288805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.288837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.289196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.289228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.289584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.289615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.289990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.290022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.290380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.290413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.290769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.290802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.291155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.291186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.291547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.291578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.291943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.291977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.292337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.292367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.292711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.292744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.293108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.293138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.293496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.293528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.293867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.293899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.294254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.294286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.294650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.294682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.294933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.294964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.295329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.295372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.295721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.295753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.296002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.296032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.296385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.296415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.296775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.296807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.295 qpair failed and we were unable to recover it.
00:32:00.295 [2024-12-06 17:47:52.297168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.295 [2024-12-06 17:47:52.297202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.297555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.297585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.297874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.297906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.298275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.298306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.298673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.298704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.299069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.299100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.299448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.299477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.299839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.299871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.300235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.300268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.300620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.300663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.301054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.301085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.301424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.301455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.301833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.301866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.302213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.302247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.302596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.302626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.302998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.303030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.303381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.303413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.303777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.303810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.304167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.304199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.304555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.304585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.304951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.304984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.305350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.305380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.305743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.305783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.306138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.306170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.306525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.306559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.306925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.306958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.307322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.307355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.307701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.307733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.308088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.308119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.308474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.308506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.308876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.308908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.309145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.309175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.309539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.309571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.309935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.309969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.310325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.310357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.310701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.310733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.311112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.311144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.296 [2024-12-06 17:47:52.311483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.296 [2024-12-06 17:47:52.311514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.296 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.311870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.311902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.312250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.312281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.312660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.312693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.313054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.313085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.313440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.313472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.313753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.313786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.314159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.314191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.314557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.314588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.314945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.314977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.315328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.315360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.315739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.315771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.316105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.316149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.316497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.316528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.316881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.316913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.317237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.317267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.317617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.317660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.318011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.318042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.318397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.318430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.318795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.318827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.319186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.319218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.319577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.319608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.320028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.320060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.320408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.320440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.320805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.320838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.321197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.321229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.321585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.321616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.321957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.321989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.322344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.322375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.322727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.322759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.323124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.323154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.323525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.323558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.323917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.323949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.324305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.324337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.297 [2024-12-06 17:47:52.324698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.297 [2024-12-06 17:47:52.324729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.297 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.325093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.325123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.325479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.325510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.325863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.325897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.326244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.326274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.326630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.326674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.327021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.327053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.327410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.327443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.327801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.327835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.328127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.328158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.328537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.328568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.328917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.328950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.329304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.329335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.329704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.329737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.330094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.330125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.330474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.330506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.330757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.330789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.331145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.331176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.331543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.331575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.331985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.332018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.332384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.332416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.298 [2024-12-06 17:47:52.332671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.298 [2024-12-06 17:47:52.332708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.298 qpair failed and we were unable to recover it.
00:32:00.571 [2024-12-06 17:47:52.333090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.571 [2024-12-06 17:47:52.333125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.571 qpair failed and we were unable to recover it.
00:32:00.571 [2024-12-06 17:47:52.333472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.571 [2024-12-06 17:47:52.333504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.571 qpair failed and we were unable to recover it.
00:32:00.571 [2024-12-06 17:47:52.333875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.571 [2024-12-06 17:47:52.333910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.571 qpair failed and we were unable to recover it.
00:32:00.571 [2024-12-06 17:47:52.334261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.571 [2024-12-06 17:47:52.334293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.571 qpair failed and we were unable to recover it.
00:32:00.571 [2024-12-06 17:47:52.334665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.571 [2024-12-06 17:47:52.334699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.571 qpair failed and we were unable to recover it.
00:32:00.571 [2024-12-06 17:47:52.335098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.571 [2024-12-06 17:47:52.335131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.571 qpair failed and we were unable to recover it.
00:32:00.571 [2024-12-06 17:47:52.335487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.335521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.335867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.335900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.336257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.336290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.336655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.336689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.336915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.336949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.337309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.337341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.337691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.337724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.337954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.337988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.338364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.338395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.338755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.338787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.339146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.339178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.339398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.339433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.339784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.572 [2024-12-06 17:47:52.339818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.572 qpair failed and we were unable to recover it.
00:32:00.572 [2024-12-06 17:47:52.340175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.340207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.340570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.340601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.340941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.340975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.341333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.341366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.341792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.341825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.342178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.342217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.342566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.342597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.342956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.342990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.343360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.343393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.343791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.343823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 
00:32:00.572 [2024-12-06 17:47:52.344177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.344210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.344454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.344484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.344844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.344877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.345313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.345343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.345701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.345732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.345991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.346023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.346377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.346409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.346769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.346802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.347164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.347196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.347566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.347597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 
00:32:00.572 [2024-12-06 17:47:52.347985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.348018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.348384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.348416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.348848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.348881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.349233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.349265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.572 qpair failed and we were unable to recover it. 00:32:00.572 [2024-12-06 17:47:52.349633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.572 [2024-12-06 17:47:52.349676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.350019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.350051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.350420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.350450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.350807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.350838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.351195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.351225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.351577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.351609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 
00:32:00.573 [2024-12-06 17:47:52.352025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.352056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.352429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.352461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.352821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.352858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.353281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.353312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.353668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.353701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.354061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.354092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.354538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.354570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.354923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.354956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.355307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.355340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.355697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.355730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 
00:32:00.573 [2024-12-06 17:47:52.356088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.356120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.356474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.356506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.356891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.356924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.357277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.357309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.357677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.357709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.358156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.358187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.358420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.358454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.358799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.358833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.359189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.359222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.359463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.359495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 
00:32:00.573 [2024-12-06 17:47:52.359867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.359899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.360258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.360289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.360544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.360575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.360917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.360949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.361310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.361343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.361699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.361731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.362088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.362120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.362517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.362549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.362894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.362926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.363289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.363321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 
00:32:00.573 [2024-12-06 17:47:52.363678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.363710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.363941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.363975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.573 [2024-12-06 17:47:52.364323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.573 [2024-12-06 17:47:52.364354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.573 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.364710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.364742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.365141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.365172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.365522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.365553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.365918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.365950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.366300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.366330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.366701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.366734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.367097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.367127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 
00:32:00.574 [2024-12-06 17:47:52.367487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.367520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.367883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.367916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.368269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.368303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.368657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.368689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.369125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.369158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.369509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.369542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.369910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.369942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.370308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.370341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.370575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.370611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.371016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.371049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 
00:32:00.574 [2024-12-06 17:47:52.371444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.371476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.371824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.371858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.372291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.372322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.372671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.372704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.373052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.373085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.373440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.373472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.373816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.373848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.374207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.374239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.374598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.374629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.374993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.375025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 
00:32:00.574 [2024-12-06 17:47:52.375462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.375493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.375851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.375883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.376244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.376275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.376622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.376663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.376982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.377012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.377374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.377406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.377661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.377695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.378047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.378078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.378444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.378475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.378825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.378857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 
00:32:00.574 [2024-12-06 17:47:52.379216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.574 [2024-12-06 17:47:52.379253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.574 qpair failed and we were unable to recover it. 00:32:00.574 [2024-12-06 17:47:52.379606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.379660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.380030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.380062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.380420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.380453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.380816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.380850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.381206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.381238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.381470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.381502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.381886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.381917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.382277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.382310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.382668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.382700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 
00:32:00.575 [2024-12-06 17:47:52.383069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.383101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.383450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.383483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.383850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.383881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.384239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.384271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.384627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.384669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.385050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.385082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.385423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.385455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.385802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.385836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.386196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.386228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.386583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.386617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 
00:32:00.575 [2024-12-06 17:47:52.386987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.387020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.387380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.387413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.387790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.387823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.388170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.388204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.388557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.388587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.388833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.388865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.389217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.389250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.389606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.389656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.390006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.390038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.390394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.390427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 
00:32:00.575 [2024-12-06 17:47:52.390700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.390733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.391078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.391110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.391478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.391510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.391873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.391906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.392268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.392301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.392661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.392694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.393072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.393103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.393507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.393538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.575 [2024-12-06 17:47:52.393894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.575 [2024-12-06 17:47:52.393927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.575 qpair failed and we were unable to recover it. 00:32:00.576 [2024-12-06 17:47:52.394288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.394319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 
00:32:00.576 [2024-12-06 17:47:52.394678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.394712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 00:32:00.576 [2024-12-06 17:47:52.395111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.395142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 00:32:00.576 [2024-12-06 17:47:52.395494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.395526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 00:32:00.576 [2024-12-06 17:47:52.395890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.395922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 00:32:00.576 [2024-12-06 17:47:52.396325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.396356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 00:32:00.576 [2024-12-06 17:47:52.396704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.396739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 00:32:00.576 [2024-12-06 17:47:52.397100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.397130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 00:32:00.576 [2024-12-06 17:47:52.397486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.397517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 00:32:00.576 [2024-12-06 17:47:52.397886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.397917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 00:32:00.576 [2024-12-06 17:47:52.398279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.576 [2024-12-06 17:47:52.398310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.576 qpair failed and we were unable to recover it. 
00:32:00.576 [2024-12-06 17:47:52.398675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.576 [2024-12-06 17:47:52.398708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.576 qpair failed and we were unable to recover it.
[... this three-line failure (connect() errno = 111 -> sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats roughly 140 times between 17:47:52.398 and 17:47:52.453; only the timestamps differ ...]
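errno = 111 is ECONNREFUSED on Linux: while the target application is down there is nothing listening on 10.0.0.2:4420 (the NVMe/TCP well-known port), so the kernel refuses every TCP connect() from the host, and posix_sock_create surfaces that to nvme_tcp_qpair_connect_sock. A standalone sketch that reproduces the same failure against a port with no listener (illustrative only, not SPDK code):

/* Minimal sketch: a TCP connect() to an address with no listener fails
 * with errno 111 (ECONNREFUSED), which is what posix_sock_create logs above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* On the test bed above this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}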
00:32:00.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1735066 Killed "${NVMF_APP[@]}" "$@"
00:32:00.580 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:32:00.580 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
[... interleaved with the shell trace above, the same connect()/errno = 111 failure triple repeats about 7 more times between 17:47:52.453 and 17:47:52.456 ...]
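This is the expected shape of the tc2 case: line 36 of target_disconnect.sh kills the running target ("Killed ${NVMF_APP[@]}"), so every host-side reconnect attempt is refused until disconnect_init/nvmfappstart bring a new target up. A hedged sketch of such a host-side retry loop (illustrative only; try_connect is a hypothetical helper, not SPDK's actual reconnect path):

/* Sketch of a bounded reconnect loop: keep attempting to reach the NVMe/TCP
 * target while it restarts, treating ECONNREFUSED as "not back yet". */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int try_connect(const char *ip, unsigned short port)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        return fd;              /* connected: target is back */
    close(fd);
    return -errno;              /* -ECONNREFUSED (-111) while it is down */
}

int main(void)
{
    for (int attempt = 0; attempt < 50; attempt++) {
        int rc = try_connect("10.0.0.2", 4420);
        if (rc >= 0) {
            printf("reconnected on attempt %d\n", attempt + 1);
            close(rc);
            return 0;
        }
        if (rc != -ECONNREFUSED)    /* anything else is a real error */
            break;
        usleep(50 * 1000);          /* back off 50 ms between attempts */
    }
    fprintf(stderr, "target did not come back\n");
    return 1;
}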
00:32:00.580 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:00.580 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:00.580 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:00.581 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1735153
00:32:00.581 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1735153
00:32:00.581 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:00.581 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1735153 ']'
00:32:00.581 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
[... interleaved with the trace above, the connect()/errno = 111 failure triple repeats roughly 26 more times between 17:47:52.456 and 17:47:52.466 ...]
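The trace restarts nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with -m 0xF0. In SPDK applications -m is a hexadecimal core mask; 0xF0 sets bits 4 through 7, pinning the target's reactors to cores 4-7. A tiny illustrative decoder of that mask (not part of the test suite):

/* Illustrative only: how a core mask like -m 0xF0 maps to CPU ids.
 * 0xF0 has bits 4..7 set, so the target runs on cores 4-7. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xF0;               /* from `nvmf_tgt -m 0xF0` */
    for (int cpu = 0; cpu < 64; cpu++) {
        if (mask & (1ULL << cpu))
            printf("core %d enabled\n", cpu);     /* prints cores 4, 5, 6, 7 */
    }
    return 0;
}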
00:32:00.581 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:00.581 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:00.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:00.581 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:00.581 17:47:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect()/errno = 111 failure triple repeats roughly 28 more times between 17:47:52.466 and 17:47:52.476 while the new target comes up ...]
00:32:00.581 [2024-12-06 17:47:52.476896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.476932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.477309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.477343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.477719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.477753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.478137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.478170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.478524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.478558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.478871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.478905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.479148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.479181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.479556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.479590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.479999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.480032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.480411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.480443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 
00:32:00.582 [2024-12-06 17:47:52.480827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.480859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.481225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.481258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.481539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.481570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.481903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.481938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.482301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.482334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.482695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.482728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.483023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.483054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.483437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.483469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.483825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.483861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.484123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.484155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 
00:32:00.582 [2024-12-06 17:47:52.484399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.484430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.484690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.484724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.484871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.484899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.485182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.485214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.485468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.485499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.485776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.485807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.486157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.486189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.486550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.486583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.486844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.486878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.487259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.487291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 
00:32:00.582 [2024-12-06 17:47:52.487677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.487712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.487873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.487906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.488275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.488308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.488541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.488573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.488978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.489012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.489356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.489388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.489764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.489797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.490058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.490090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.582 qpair failed and we were unable to recover it. 00:32:00.582 [2024-12-06 17:47:52.490458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.582 [2024-12-06 17:47:52.490491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.490956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.490990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 
00:32:00.583 [2024-12-06 17:47:52.491410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.491443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.491825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.491858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.492217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.492250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.492623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.492670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.492893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.492925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.493288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.493319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.493700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.493734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.493986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.494017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.494382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.494415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.494784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.494818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 
00:32:00.583 [2024-12-06 17:47:52.495191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.495223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.495593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.495624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.496045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.496078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.496462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.496496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.496893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.496927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.497348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.497381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.497737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.497771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.498163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.498197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.498579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.498611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.498889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.498921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 
00:32:00.583 [2024-12-06 17:47:52.499165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.499197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.499579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.499613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.500020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.500055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.500435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.500468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.500826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.500861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.501137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.501169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.501533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.501570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.501812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.501847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.502212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.502244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.502615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.502660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 
00:32:00.583 [2024-12-06 17:47:52.503098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.503130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.503355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.503387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.503754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.503787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.504185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.504217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.504437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.504469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.504902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.504934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.505300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.583 [2024-12-06 17:47:52.505333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.583 qpair failed and we were unable to recover it. 00:32:00.583 [2024-12-06 17:47:52.505712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.505745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.506166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.506197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.506563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.506597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 
00:32:00.584 [2024-12-06 17:47:52.507010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.507044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.507416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.507448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.507843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.507876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.508234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.508264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.508676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.508708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.508961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.508992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.509313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.509344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.509730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.509764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.510184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.510215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.510685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.510718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 
00:32:00.584 [2024-12-06 17:47:52.511086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.511119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 [2024-12-06 17:47:52.511375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.511407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 [2024-12-06 17:47:52.511522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.511551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Write completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 Read completed with error (sct=0, sc=8)
00:32:00.584 starting I/O failed
00:32:00.584 [2024-12-06 17:47:52.512384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:00.584 [2024-12-06 17:47:52.512947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.513070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 [2024-12-06 17:47:52.513562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.513602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 [2024-12-06 17:47:52.513988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.514097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 [2024-12-06 17:47:52.514501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.514540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 [2024-12-06 17:47:52.514809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.514845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 [2024-12-06 17:47:52.515206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.515237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 [2024-12-06 17:47:52.515622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.515666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 [2024-12-06 17:47:52.516065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.516098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
00:32:00.584 [2024-12-06 17:47:52.516469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.584 [2024-12-06 17:47:52.516503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.584 qpair failed and we were unable to recover it.
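The burst of "completed with error (sct=0, sc=8)" entries above is every command still outstanding on the qpair being failed in one sweep when the connection drops: sct=0 selects the NVMe generic command status set, where, per the NVMe base specification, status code 8 is Command Aborted due to SQ Deletion, i.e. the submission queue vanished along with the dead qpair. The "CQ transport error -6" is -ENXIO, matching the "No such device or address" text in the same line. A small decoder sketch for the (sct, sc) pair; the name table covers only the codes that appear in this log and is illustrative, not SPDK's own decoding:

    /* Illustrative decode of the (sct, sc) pairs above. sct=0 is the
     * NVMe generic command status set; names below are partial. */
    #include <stdio.h>

    static const char *generic_sc_name(unsigned int sc)
    {
        switch (sc) {
        case 0x0: return "Successful Completion";
        case 0x4: return "Data Transfer Error";
        case 0x7: return "Command Abort Requested";
        case 0x8: return "Command Aborted due to SQ Deletion";
        default:  return "(not decoded in this sketch)";
        }
    }

    int main(void)
    {
        unsigned int sct = 0, sc = 8;   /* values from the log above */

        if (sct == 0)                   /* 0 = generic command status */
            printf("sct=%u, sc=%u -> %s\n", sct, sc, generic_sc_name(sc));
        return 0;
    }

Note also that the tqpair address switches from 0x23af0c0 to 0x7fc288000b90 from here on: the retries that follow come from a newly allocated qpair.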
00:32:00.584 [2024-12-06 17:47:52.516871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.516903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.517274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.517307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.584 qpair failed and we were unable to recover it. 00:32:00.584 [2024-12-06 17:47:52.517704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.584 [2024-12-06 17:47:52.517736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.518124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.518158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.518547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.518578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.518926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.518962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.519216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.519247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.519636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.519678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.520058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.520091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.520314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.520347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 
00:32:00.585 [2024-12-06 17:47:52.520728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.520761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.521084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.521116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.521356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.521387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.521649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.521681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.522056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.522086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.522322] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:32:00.585 [2024-12-06 17:47:52.522379] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.585 [2024-12-06 17:47:52.522462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.522493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.522732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.522763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.523091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.523121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 00:32:00.585 [2024-12-06 17:47:52.523379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.585 [2024-12-06 17:47:52.523410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.585 qpair failed and we were unable to recover it. 
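Buried in the failure spam, the "Starting SPDK v25.01-pre ... initialization" line is the restarted target coming up. Its EAL core mask (-c 0xF0, mirroring the -m 0xF0 on the nvmf_tgt command line earlier) has bits 4-7 set, so the target polls on CPU cores 4 through 7. A tiny sketch of how such a mask expands:

    /* Sketch: expand the EAL core mask shown above. 0xF0 has bits 4-7
     * set, so the restarted target runs on CPU cores 4 through 7. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0;   /* from "-c 0xF0" in the EAL line */
        unsigned int cpu;

        printf("core mask 0x%lX ->", mask);
        for (cpu = 0; cpu < 8 * sizeof(mask); cpu++)
            if (mask & (1UL << cpu))
                printf(" %u", cpu);
        printf("\n");                /* prints: core mask 0xF0 -> 4 5 6 7 */
        return 0;
    }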
00:32:00.585 [2024-12-06 17:47:52.523776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.585 [2024-12-06 17:47:52.523812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.585 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it) repeats ~210 times between 17:47:52.523 and 17:47:52.601, every occurrence for tqpair=0x7fc288000b90, addr=10.0.0.2, port=4420; duplicate records elided ...]
00:32:00.591 [2024-12-06 17:47:52.601394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.591 [2024-12-06 17:47:52.601423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.591 qpair failed and we were unable to recover it.
00:32:00.591 [2024-12-06 17:47:52.601812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.601843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.602214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.602243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.602652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.602682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.603051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.603080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.603432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.603461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.603830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.603861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.604231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.604261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.604649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.604680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.605051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.605082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.605330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.605362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 
00:32:00.591 [2024-12-06 17:47:52.605741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.605771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.606198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.606228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.606604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.606633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.606987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.607016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.607237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.607266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.607673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.607704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.608076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.608106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.608354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.608386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.608765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.608797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.609188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.609218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 
00:32:00.591 [2024-12-06 17:47:52.609497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.609525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.609902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.609933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.610355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.610385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.610612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.610650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.610894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.610925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.611257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.611287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.611670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.611702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.612076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.612105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.612494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.591 [2024-12-06 17:47:52.612524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.591 qpair failed and we were unable to recover it. 00:32:00.591 [2024-12-06 17:47:52.612912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.612942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 
00:32:00.592 [2024-12-06 17:47:52.613281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.613310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.613681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.613712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.614005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.614034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.614389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.614418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.614666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.614706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.615075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.615104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.615388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.615417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.615782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.615812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.616181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.616212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.616508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.616537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 
00:32:00.592 [2024-12-06 17:47:52.616902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.616934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.617335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.617364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.617756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.617786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.618165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.618193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.618622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.618661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.619082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.619112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.619474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.619504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.619893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.619923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.620306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.620336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.620715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.620745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 
00:32:00.592 [2024-12-06 17:47:52.620984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.621016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.621401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.621431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.621809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.621839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.622209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.622238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.622626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.622672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.623042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.623070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.623470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.623499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.623765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.623795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.624179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.624208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 00:32:00.592 [2024-12-06 17:47:52.624581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.592 [2024-12-06 17:47:52.624610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.592 qpair failed and we were unable to recover it. 
00:32:00.592 [2024-12-06 17:47:52.624998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.592 [2024-12-06 17:47:52.625027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.592 qpair failed and we were unable to recover it.
00:32:00.866 [same failure sequence repeated, 17:47:52.625390 through 17:47:52.626310]
00:32:00.866 [2024-12-06 17:47:52.626690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.866 [2024-12-06 17:47:52.626688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:00.866 [2024-12-06 17:47:52.626721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.866 qpair failed and we were unable to recover it.
00:32:00.866 [same failure sequence repeated, 17:47:52.627086 through 17:47:52.628182]
00:32:00.866 [2024-12-06 17:47:52.628561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.628591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.628857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.628887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.629350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.629379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.629744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.629774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.630140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.630170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.630551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.630581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.630963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.630993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.631370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.631399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.631792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.631822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.632167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.632199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 
00:32:00.866 [2024-12-06 17:47:52.632564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.632593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.632966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.632996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.633242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.633273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.633623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.633660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.633885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.633915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.634309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.634339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.634701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.634731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.635116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.866 [2024-12-06 17:47:52.635146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.866 qpair failed and we were unable to recover it. 00:32:00.866 [2024-12-06 17:47:52.635493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.635530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.635888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.635920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 
00:32:00.867 [2024-12-06 17:47:52.636262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.636292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.636665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.636696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.637101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.637131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.637511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.637540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.637929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.637960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.638398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.638428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.638780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.638817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.639202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.639232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.639617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.639655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.640024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.640053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 
00:32:00.867 [2024-12-06 17:47:52.640406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.640437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.640810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.640842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.641190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.641221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.641587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.641617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.641987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.642018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.642375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.642405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.642781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.642813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.643053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.643083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.643472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.643501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.643910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.643940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 
00:32:00.867 [2024-12-06 17:47:52.644139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.644171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.644531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.644560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.644899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.644937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.645261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.645290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.645661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.645692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.646045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.646076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.646418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.646448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.646801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.646832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.647226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.647255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.647500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.647529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 
00:32:00.867 [2024-12-06 17:47:52.647981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.648013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.648417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.648446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.648804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.648836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.649214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.649245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.649604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.649634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.867 [2024-12-06 17:47:52.650026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.867 [2024-12-06 17:47:52.650055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.867 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.650415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.650446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.650811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.650842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.651043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.651074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.651447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.651479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 
00:32:00.868 [2024-12-06 17:47:52.651844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.651875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.652245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.652275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.652619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.652659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.653041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.653070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.653467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.653496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.653834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.653863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.654241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.654270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.654635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.654684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.655116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.655145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.655502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.655531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 
00:32:00.868 [2024-12-06 17:47:52.655893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.655924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.656288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.656317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.656683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.656713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.657071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.657101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.657467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.657496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.657852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.657882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.658245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.658274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.658632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.658672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.658920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.658948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 00:32:00.868 [2024-12-06 17:47:52.659319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.868 [2024-12-06 17:47:52.659349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.868 qpair failed and we were unable to recover it. 
00:32:00.868 [2024-12-06 17:47:52.659719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.659750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.660109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.660139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.660485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.660513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.660879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.660910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.661296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.661324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.661673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.661709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.662092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.662121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.662471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.662499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.662744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.662774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.663125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.663156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.663509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.663538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.664001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.664031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.664395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.664425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.868 [2024-12-06 17:47:52.664783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.868 [2024-12-06 17:47:52.664812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.868 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.665179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.665209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.665595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.665624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.665896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.665926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.666289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.666318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.666699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.666729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.667152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.667184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.667543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.667573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.667808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.667839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.668230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.668260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.668630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.668669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.669018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.669047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.669407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.669436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.669811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.669843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.670219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.670249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.670596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.670626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.670998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.671027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.671392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.671421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.671780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.671810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.672021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.672051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.672302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.672331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.672592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.672622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.673006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.673037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.673393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.673422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.673769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.673801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.674143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.674173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.674534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.674564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.674967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.674998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.675377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.675409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.675786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.675818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.676192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.676223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.676582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.676611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.677008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.677046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.677416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.677448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.677863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.677896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.678222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.678253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.678617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.678654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 qpair failed and we were unable to recover it.
00:32:00.869 [2024-12-06 17:47:52.679023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.869 [2024-12-06 17:47:52.679014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:00.869 [2024-12-06 17:47:52.679052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.869 [2024-12-06 17:47:52.679060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:00.870 [2024-12-06 17:47:52.679071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:00.870 [2024-12-06 17:47:52.679078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.679085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:00.870 [2024-12-06 17:47:52.679414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.679444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.679810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.679840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.680216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.680246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.680620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.680660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.681024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.681053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.681065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:32:00.870 [2024-12-06 17:47:52.681220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:32:00.870 [2024-12-06 17:47:52.681394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:32:00.870 [2024-12-06 17:47:52.681422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.681451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.681394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:32:00.870 [2024-12-06 17:47:52.681794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.681827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.682209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.682240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.682485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.682515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.682897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.682927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.683163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.683193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.683584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.683614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.683867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.683897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.684143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.684172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.684520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.684551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.684815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.684848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.685223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.685252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.685621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.685660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.686057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.686086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.686459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.686490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.686876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.686907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.687254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.687285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.687516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.687546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.687883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.687914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.688281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.688312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.688662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.688694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.689057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.689086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.689444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.689474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.689703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.689737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.870 qpair failed and we were unable to recover it.
00:32:00.870 [2024-12-06 17:47:52.690130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.870 [2024-12-06 17:47:52.690160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.690408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.690438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.690810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.690843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.691219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.691248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.691611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.691651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.692019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.692050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.692410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.692441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.692821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.692853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.693207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.693236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.693480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.693508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.693759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.693788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.694118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.694147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.694528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.694557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.694929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.694960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.695220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.695252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.695398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.695434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.695790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.695821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.696191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.696222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.696583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.696613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.696984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.697014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.697271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.697299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.697597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.697626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.697995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.698026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.698409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.698439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.698794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.698824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.699105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.699134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.699508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.699538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.699782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.699812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.700191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.700221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.700449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.700478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.700876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.700908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.701167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.701197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.701551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.701581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.701820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.701850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.702228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.702257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.702571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.702600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.702964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.702994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.703280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.703309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.871 qpair failed and we were unable to recover it.
00:32:00.871 [2024-12-06 17:47:52.703656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.871 [2024-12-06 17:47:52.703687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.703898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.703928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.704171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.704199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.704566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.704596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.704963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.704997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.705363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.705393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.705635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.705676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.706059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.706089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.706456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.706485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.706829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.706860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.707319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.707349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.707705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.707737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.708121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.708149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.708377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.708406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.708630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.708671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.709030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.709060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.709422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.709452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.709801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.709839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.710204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.710234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.710609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.710646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.711011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.711041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.711401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.711431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.711665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.711695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.711816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.711848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.712090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.712121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.712470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.712499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.712712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.712745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.712972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.713003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.713426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.713457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.713808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.713840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.714213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.714243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.714514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.714545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.714935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.714966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.715202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.715231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.715629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.715668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.716049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.716079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.716317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.716348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.716724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.716755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.872 qpair failed and we were unable to recover it.
00:32:00.872 [2024-12-06 17:47:52.717163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.872 [2024-12-06 17:47:52.717193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.873 qpair failed and we were unable to recover it.
00:32:00.873 [2024-12-06 17:47:52.717553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.873 [2024-12-06 17:47:52.717582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.873 qpair failed and we were unable to recover it.
00:32:00.873 [2024-12-06 17:47:52.717961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.873 [2024-12-06 17:47:52.717996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.873 qpair failed and we were unable to recover it.
00:32:00.873 [2024-12-06 17:47:52.718376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.873 [2024-12-06 17:47:52.718408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.873 qpair failed and we were unable to recover it.
00:32:00.873 [2024-12-06 17:47:52.718776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.873 [2024-12-06 17:47:52.718808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.873 qpair failed and we were unable to recover it.
00:32:00.873 [2024-12-06 17:47:52.718982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.873 [2024-12-06 17:47:52.719011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.873 qpair failed and we were unable to recover it.
00:32:00.873 [2024-12-06 17:47:52.719387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.873 [2024-12-06 17:47:52.719417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.873 qpair failed and we were unable to recover it.
00:32:00.873 [2024-12-06 17:47:52.719786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.873 [2024-12-06 17:47:52.719817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.873 qpair failed and we were unable to recover it.
00:32:00.873 [2024-12-06 17:47:52.720232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.873 [2024-12-06 17:47:52.720263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.873 qpair failed and we were unable to recover it.
00:32:00.873 [2024-12-06 17:47:52.720589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.873 [2024-12-06 17:47:52.720628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.873 qpair failed and we were unable to recover it.
00:32:00.873 [2024-12-06 17:47:52.721032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.721062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.721288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.721318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.721753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.721785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.722004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.722034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.722394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.722424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.722768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.722798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.723180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.723211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.723582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.723612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.723879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.723910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.724276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.724314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 
00:32:00.873 [2024-12-06 17:47:52.724674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.724707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.725056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.725084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.725338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.725367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.725760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.725791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.726022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.726051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.726401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.726430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.726696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.726726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.727090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.727120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.727346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.727375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.727734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.727765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 
00:32:00.873 [2024-12-06 17:47:52.728134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.728165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.728512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.728544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.728791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.728822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.729203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.729232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.729483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.729512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.729773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.729805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.730161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.730192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.730463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.730491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.873 qpair failed and we were unable to recover it. 00:32:00.873 [2024-12-06 17:47:52.730834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.873 [2024-12-06 17:47:52.730866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.731159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.731190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 
00:32:00.874 [2024-12-06 17:47:52.731479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.731509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.731874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.731905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.732288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.732319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.732535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.732564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.732988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.733018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.733382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.733412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.733799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.733832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.734084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.734115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.734466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.734497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.734743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.734773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 
00:32:00.874 [2024-12-06 17:47:52.735143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.735174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.735520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.735550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.735792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.735823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.736070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.736098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.736492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.736523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.736925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.736955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.737308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.737339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.737717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.737748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.738118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.738147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.738510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.738546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 
00:32:00.874 [2024-12-06 17:47:52.738911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.738942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.739299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.739328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.739720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.739750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.740066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.740097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.740195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.740224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.740482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.740511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.740665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.740696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.741071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.741100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.741554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.741584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.741973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.742004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 
00:32:00.874 [2024-12-06 17:47:52.742379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.742410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.742670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.742702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.742974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.743003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.743408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.743438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.743796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.743828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.744046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.744075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.874 qpair failed and we were unable to recover it. 00:32:00.874 [2024-12-06 17:47:52.744397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.874 [2024-12-06 17:47:52.744426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.744698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.744728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.745026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.745055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.745422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.745452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 
00:32:00.875 [2024-12-06 17:47:52.745844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.745875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.746244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.746281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.746653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.746683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.747045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.747074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.747441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.747470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.747841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.747871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.748249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.748280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.748532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.748562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.748939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.748970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.749347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.749376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 
00:32:00.875 [2024-12-06 17:47:52.749683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.749713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.750041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.750071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.750433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.750463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.750689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.750721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.751088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.751117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.751517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.751546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.751920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.751952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.752281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.752310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.752647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.752677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.753032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.753068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 
00:32:00.875 [2024-12-06 17:47:52.753287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.753315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.753693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.753723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.754079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.754109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.754371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.754400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.754762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.754793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.755091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.755120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.755494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.755523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.755766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.755796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.756159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.756189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.875 qpair failed and we were unable to recover it. 00:32:00.875 [2024-12-06 17:47:52.756570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.875 [2024-12-06 17:47:52.756600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 
00:32:00.876 [2024-12-06 17:47:52.756947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.756977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.757229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.757258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.757675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.757706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.757944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.757973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.758214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.758243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.758622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.758661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.759063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.759092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.759450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.759479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.759702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.759733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.760115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.760145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 
00:32:00.876 [2024-12-06 17:47:52.760369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.760398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.760670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.760700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.761059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.761089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.761460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.761490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.761657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.761687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.762077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.762107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.762359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.762393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.762787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.762817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.763188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.763217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.763595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.763625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 
00:32:00.876 [2024-12-06 17:47:52.763841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.763871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.764238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.764267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.764659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.764690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.764952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.764982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.765250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.765282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.765667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.765698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.766071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.766101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.766449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.766479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.766814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.766844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.767208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.767237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 
00:32:00.876 [2024-12-06 17:47:52.767524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.767552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.767938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.767968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.768212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.768241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.768666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.768697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.768919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.768948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.769168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.769198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.769554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.769582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.876 qpair failed and we were unable to recover it. 00:32:00.876 [2024-12-06 17:47:52.769810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.876 [2024-12-06 17:47:52.769840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.770207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.770238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.770597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.770627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 
00:32:00.877 [2024-12-06 17:47:52.770975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.771004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.771098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.771126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.771462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.771492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.771716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.771747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.771964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.771993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.772340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.772369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.772501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.772530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.772782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.772816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.773204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.773235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 00:32:00.877 [2024-12-06 17:47:52.773611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.877 [2024-12-06 17:47:52.773660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:00.877 qpair failed and we were unable to recover it. 
00:32:00.877 [2024-12-06 17:47:52.774000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.877 [2024-12-06 17:47:52.774029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.877 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triple repeats against tqpair=0x7fc288000b90 for 156 attempts in total, timestamps 17:47:52.774 through 17:47:52.829; 154 intermediate repetitions elided ...]
00:32:00.881 [2024-12-06 17:47:52.829080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.881 [2024-12-06 17:47:52.829108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:00.881 qpair failed and we were unable to recover it.
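Note: errno 111 on Linux is ECONNREFUSED — each attempt reaches 10.0.0.2, but nothing is accepting TCP connections on port 4420 (the NVMe/TCP default) at that moment, so the kernel answers with RST and SPDK's posix_sock_create reports the failure. A minimal standalone sketch of the failing call (hypothetical reproducer, not SPDK code):

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Assumes no listener on 10.0.0.2:4420, as during the target restart above. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420), /* NVMe/TCP default port */
    };

    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}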
00:32:00.881 [2024-12-06 17:47:52.829350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4e10 is same with the state(6) to be set
00:32:00.881 Read completed with error (sct=0, sc=8)
00:32:00.881 starting I/O failed
[... 32 outstanding I/Os (24 reads, 8 writes) complete with the same error (sct=0, sc=8), each followed by "starting I/O failed"; 31 further completion pairs elided ...]
00:32:00.881 [2024-12-06 17:47:52.830320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
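Note: (sct=0, sc=8) are the NVMe completion status fields — status code type 0 (generic) with status code 0x08, "command aborted due to SQ deletion": the 32 queued reads and writes were aborted when the qpair's submission queue went away, and the -6 in the CQ transport error is -ENXIO, matching the "No such device or address" text. A sketch of how a completion callback can decode this, using enum names from SPDK's public spdk/nvme_spec.h (illustrative, not this test's code):

#include "spdk/nvme.h"

static void
io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        /* sct=0, sc=8 decodes to generic type / aborted-on-SQ-deletion:
         * the command did not fail on media, its queue was torn down. */
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* candidate for resubmission on a freshly connected qpair */
        }
    }
}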
00:32:00.881 [2024-12-06 17:47:52.830935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.881 [2024-12-06 17:47:52.831051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc27c000b90 with addr=10.0.0.2, port=4420
00:32:00.881 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triple repeats against tqpair=0x7fc27c000b90 for 26 attempts in total, timestamps 17:47:52.830 through 17:47:52.844; 25 further repetitions elided ...]
00:32:00.882 [2024-12-06 17:47:52.844550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.882 [2024-12-06 17:47:52.844682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420
00:32:00.882 qpair failed and we were unable to recover it.
[... the triple then repeats against tqpair=0x7fc280000b90 for 15 attempts in total, timestamps 17:47:52.844 through 17:47:52.850; 13 intermediate repetitions elided ...]
00:32:00.882 [2024-12-06 17:47:52.850471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.882 [2024-12-06 17:47:52.850500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420
00:32:00.882 qpair failed and we were unable to recover it.
00:32:00.882 [2024-12-06 17:47:52.850860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.882 [2024-12-06 17:47:52.850891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.882 qpair failed and we were unable to recover it. 00:32:00.882 [2024-12-06 17:47:52.851267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.882 [2024-12-06 17:47:52.851296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.882 qpair failed and we were unable to recover it. 00:32:00.882 [2024-12-06 17:47:52.851629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.882 [2024-12-06 17:47:52.851673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.882 qpair failed and we were unable to recover it. 00:32:00.882 [2024-12-06 17:47:52.852043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.852073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.852429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.852459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.852858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.852889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.853256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.853286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.853655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.853688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.854050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.854080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.854426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.854455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 
00:32:00.883 [2024-12-06 17:47:52.854814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.854845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.855200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.855229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.855567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.855596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.855958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.855988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.856221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.856249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.856493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.856530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.856760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.856790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.857020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.857049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.857342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.857372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.857602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.857647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 
00:32:00.883 [2024-12-06 17:47:52.858049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.858079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.858354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.858382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.858745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.858775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.859161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.859191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.859541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.859571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.859925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.859955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.860332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.860362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.860617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.860654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.860991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.861021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.861379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.861409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 
00:32:00.883 [2024-12-06 17:47:52.861779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.861808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.862156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.862186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.862440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.862472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.862838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.862869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.863212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.863242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.863596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.863626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.864003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.864032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.864301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.864329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.864676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.864706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.865096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.865126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 
00:32:00.883 [2024-12-06 17:47:52.865471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.883 [2024-12-06 17:47:52.865500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.883 qpair failed and we were unable to recover it. 00:32:00.883 [2024-12-06 17:47:52.865863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.865892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.866256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.866286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.866599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.866628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.866842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.866872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.867120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.867148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.867490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.867520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.867733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.867764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.868086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.868114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.868336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.868366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 
00:32:00.884 [2024-12-06 17:47:52.868777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.868808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.869180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.869209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.869593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.869622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.869954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.869984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.870351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.870380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.870757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.870793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.871012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.871041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.871463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.871492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.871860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.871891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.872152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.872183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 
00:32:00.884 [2024-12-06 17:47:52.872558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.872588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.872922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.872953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.873295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.873324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.873681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.873712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.874079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.874115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.874438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.874467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.874835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.874866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.875196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.875225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.875594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.875623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.875735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.875765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 
00:32:00.884 [2024-12-06 17:47:52.876100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.876129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.876509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.876538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.876755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.876786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.876989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.877018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.877404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.884 [2024-12-06 17:47:52.877435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.884 qpair failed and we were unable to recover it. 00:32:00.884 [2024-12-06 17:47:52.877676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.877707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.878079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.878108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.878486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.878515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.878869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.878899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.879252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.879280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 
00:32:00.885 [2024-12-06 17:47:52.879619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.879672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.880002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.880032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.880417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.880446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.880665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.880696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.881055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.881085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.881364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.881392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.881715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.881745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.882091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.882122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.882459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.882488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.882852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.882883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 
00:32:00.885 [2024-12-06 17:47:52.883252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.883282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.883627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.883665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.884111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.884140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.884490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.884519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.884794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.884824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.885187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.885223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.885569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.885599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.885861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.885891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.886212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.886242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.886510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.886539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 
00:32:00.885 [2024-12-06 17:47:52.886883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.886913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.887281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.887311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.887677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.887708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.888016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.888046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.888397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.888427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.888780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.888812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.889181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.889210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.889438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.889466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.889813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.889843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.890220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.890249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 
00:32:00.885 [2024-12-06 17:47:52.890604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.890632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.890998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.891028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.891249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.885 [2024-12-06 17:47:52.891278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.885 qpair failed and we were unable to recover it. 00:32:00.885 [2024-12-06 17:47:52.891657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.891688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.892021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.892052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.892428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.892457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.892796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.892832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.893196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.893226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.893592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.893621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.894057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.894087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 
00:32:00.886 [2024-12-06 17:47:52.894450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.894479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.894683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.894713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.895084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.895114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.895472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.895502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.895861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.895892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.896154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.896183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.896529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.896558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.896922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.896953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.897310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.897339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 00:32:00.886 [2024-12-06 17:47:52.897689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.886 [2024-12-06 17:47:52.897719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:00.886 qpair failed and we were unable to recover it. 
00:32:00.886 [2024-12-06 17:47:52.898036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:00.886 [2024-12-06 17:47:52.898066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420
00:32:00.886 qpair failed and we were unable to recover it.
[... the same three-line error repeats verbatim for every retry from 17:47:52.898 through 17:47:52.973 (Jenkins 00:32:00.886-00:32:01.166): connect() to 10.0.0.2:4420 returns errno = 111 each time, the TCP qpair cannot be established, and the driver reports that it was unable to recover it. Nearly all retries report tqpair=0x7fc280000b90; a brief run around 17:47:52.939-52.941 reports tqpair=0x7fc27c000b90 before the original qpair address reappears. ...]
00:32:01.166 [2024-12-06 17:47:52.973216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.973245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.973474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.973502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.973860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.973891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.974118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.974150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.974410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.974440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.974693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.974722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.975096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.975126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.975580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.975610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.975845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.975875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.976265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.976295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 
00:32:01.166 [2024-12-06 17:47:52.976652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.976684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.977037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.977068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.977274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.977303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.977674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.977708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.977913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.977942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.978205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.978233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.978573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.978602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.979000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.979031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.979365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.979396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.979741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.979772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 
00:32:01.166 [2024-12-06 17:47:52.980099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.980130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.980497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.980526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.980765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.980796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.981026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.981055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.981413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.981443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.981797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.166 [2024-12-06 17:47:52.981828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.166 qpair failed and we were unable to recover it. 00:32:01.166 [2024-12-06 17:47:52.982153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.982184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.982394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.982423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.982800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.982830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.983187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.983217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 
00:32:01.167 [2024-12-06 17:47:52.983546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.983575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.983821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.983851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.984215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.984244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.984506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.984535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.984886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.984917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.985145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.985174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.985535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.985565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.985920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.985951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.986179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.986208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.986606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.986636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 
00:32:01.167 [2024-12-06 17:47:52.987021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.987051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.987406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.987436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.987788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.987818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.988180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.988209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.988445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.988474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.988827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.988857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.989213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.989242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.989606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.989635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.990049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.990078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.990427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.990457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 
00:32:01.167 [2024-12-06 17:47:52.990662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.990693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.990893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.990922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.991256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.991285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.991516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.991544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.991928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.991958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.992336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.992365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.992712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.992741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.992963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.992992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.993200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.993228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.993452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.993481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 
00:32:01.167 [2024-12-06 17:47:52.993724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.993754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.994069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.994098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.994421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.994456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.994802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.167 [2024-12-06 17:47:52.994833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.167 qpair failed and we were unable to recover it. 00:32:01.167 [2024-12-06 17:47:52.995171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.995200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.995586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.995616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.995887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.995917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.996154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.996183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.996430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.996460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.996675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.996705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 
00:32:01.168 [2024-12-06 17:47:52.997052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.997082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.997454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.997482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.997869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.997900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.998131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.998161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.998525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.998554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.998920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.998949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.999195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.999224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:52.999596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:52.999626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.000002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.000032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.000277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.000306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 
00:32:01.168 [2024-12-06 17:47:53.000576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.000605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.000977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.001008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.001378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.001408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.001756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.001785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.002026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.002056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.002428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.002457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.002833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.002863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.002972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.003004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.003259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.003289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.003626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.003663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 
00:32:01.168 [2024-12-06 17:47:53.003999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.004029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.004235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.004266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.004650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.004680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.005015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.005044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.005146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.005175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.005570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.005599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.006023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.006052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.006431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.006461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.006824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.006854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.007067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.007095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 
00:32:01.168 [2024-12-06 17:47:53.007451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.007481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.168 [2024-12-06 17:47:53.007703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.168 [2024-12-06 17:47:53.007732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.168 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.008099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.008134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.008475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.008504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.008713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.008742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.009197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.009227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.009462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.009490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.009873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.009903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.010298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.010328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.010424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.010452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 
00:32:01.169 [2024-12-06 17:47:53.010784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.010814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.011190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.011219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.011595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.011624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.011864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.011893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.012279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.012308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.012521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.012550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.012896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.012927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.013268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.013298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.013516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.013546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.013889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.013918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 
00:32:01.169 [2024-12-06 17:47:53.014268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.014297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.014661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.014692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.015092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.015121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.015477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.015505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.015872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.015902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.016279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.016308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.016671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.016702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.017105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.017135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.017483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.017512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.017864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.017894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 
00:32:01.169 [2024-12-06 17:47:53.018246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.018276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.018621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.018660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.019013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.019042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.019402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.019431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.019775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.019805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.020020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.020049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.020317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.020346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.020671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.020700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.021066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.021096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 00:32:01.169 [2024-12-06 17:47:53.021346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.169 [2024-12-06 17:47:53.021377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.169 qpair failed and we were unable to recover it. 
00:32:01.170 [2024-12-06 17:47:53.021584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.021613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.021943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.021973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.022347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.022382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.022711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.022740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.023092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.023122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.023444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.023473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.023699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.023728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.024039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.024067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.024406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.024435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.024783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.024814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 
00:32:01.170 [2024-12-06 17:47:53.025155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.025184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.025540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.025569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.025957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.025988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.026372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.026401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.026719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.026748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.027139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.027168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.027539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.027569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.027804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.027835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.028100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.028132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.028421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.028450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 
00:32:01.170 [2024-12-06 17:47:53.028824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.028854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.029137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.029166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.029558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.029588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.029973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.030003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.030338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.030367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.030741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.030770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.031137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.031166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.031517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.031546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.031940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.031970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.032317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.032346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 
00:32:01.170 [2024-12-06 17:47:53.032590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.032619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.032984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.033014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.033237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.033266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.033603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.033632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.033970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.034000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.034364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.034393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.034604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.034634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.170 [2024-12-06 17:47:53.035007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.170 [2024-12-06 17:47:53.035037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.170 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.035405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.035435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.035804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.035834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 
00:32:01.171 [2024-12-06 17:47:53.036070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.036099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.036310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.036338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.036713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.036749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.037115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.037144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.037494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.037525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.037894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.037926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.038250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.038280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.038595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.038624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.039028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.039058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.039403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.039433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 
00:32:01.171 [2024-12-06 17:47:53.039630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.039669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.040058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.040089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.040447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.040477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.040705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.040735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.041095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.041125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.041474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.041504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.041891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.041922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.042286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.042317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.042673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.042704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.043055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.043085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 
00:32:01.171 [2024-12-06 17:47:53.043452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.043482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.043812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.043842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.044054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.044083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.044299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.044328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.044686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.044718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.045071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.045101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.045313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.045342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.045722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.045753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.046111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.046141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.046491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.046522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 
00:32:01.171 [2024-12-06 17:47:53.046707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.171 [2024-12-06 17:47:53.046739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.171 qpair failed and we were unable to recover it. 00:32:01.171 [2024-12-06 17:47:53.047110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.047140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.047336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.047365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.047572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.047603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.047965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.047995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.048348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.048377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.048538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.048568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.048914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.048945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.049281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.049311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.049668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.049699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 
00:32:01.172 [2024-12-06 17:47:53.050009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.050039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.050235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.050264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.050588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.050623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.050839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.050869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.051228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.051257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.051620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.051659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.052015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.052045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.052418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.052448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.052797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.052828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.053186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.053217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 
00:32:01.172 [2024-12-06 17:47:53.053557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.053587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.053934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.053966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.054326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.054356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.054709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.054740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.054940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.054969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.055320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.055351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.055689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.055722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.055954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.055983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.056314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.056344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.056703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.056735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 
00:32:01.172 [2024-12-06 17:47:53.056948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.056977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.057187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.057218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.057554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.057583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.057947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.057978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.058338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.058368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.058706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.058737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.059093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.059122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.059473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.059503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.059881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.172 [2024-12-06 17:47:53.059912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.172 qpair failed and we were unable to recover it. 00:32:01.172 [2024-12-06 17:47:53.060248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.060279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 
00:32:01.173 [2024-12-06 17:47:53.060482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.060513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.060858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.060890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.061251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.061283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.061655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.061685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.062023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.062053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.062261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.062290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.062540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.062572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.062926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.062957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.063309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.063339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.063691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.063722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 
00:32:01.173 [2024-12-06 17:47:53.063937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.063967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.064314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.064344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.064721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.064759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.064957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.064986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.065332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.065362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.065703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.065735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.065968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.065997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.066330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.066360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.066707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.066739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.067070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.067100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 
00:32:01.173 [2024-12-06 17:47:53.067305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.067335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.067687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.067718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.068087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.068117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.068519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.068549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.068753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.068784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.069153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.069182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.069526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.069556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.069802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.069832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.070190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.070220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.070554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.070585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 
00:32:01.173 [2024-12-06 17:47:53.070948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.070978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.071201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.071230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.071595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.071624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.071976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.072006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.072334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.072364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.072717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.072748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.173 [2024-12-06 17:47:53.072945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.173 [2024-12-06 17:47:53.072975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.173 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.073325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.073355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.073706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.073737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.074101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.074132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 
00:32:01.174 [2024-12-06 17:47:53.074466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.074496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.074701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.074731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.075113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.075143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.075340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.075368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.075731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.075761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.076072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.076102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.076475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.076504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.076897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.076928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.077318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.077348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 00:32:01.174 [2024-12-06 17:47:53.077587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.174 [2024-12-06 17:47:53.077618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420 00:32:01.174 qpair failed and we were unable to recover it. 
00:32:01.174 [2024-12-06 17:47:53.077864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.174 [2024-12-06 17:47:53.077895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420
00:32:01.174 qpair failed and we were unable to recover it.
00:32:01.174 [2024-12-06 17:47:53.078219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.174 [2024-12-06 17:47:53.078251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420
00:32:01.174 qpair failed and we were unable to recover it.
[the identical three-line record - connect() failed, errno = 111; sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. - repeats through 2024-12-06 17:47:53.087832]
00:32:01.175 [2024-12-06 17:47:53.087942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.087975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc280000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
00:32:01.175 [2024-12-06 17:47:53.088505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.088598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
00:32:01.175 [2024-12-06 17:47:53.088914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.088953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
00:32:01.175 [2024-12-06 17:47:53.089152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.089184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
00:32:01.175 [2024-12-06 17:47:53.089540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.089570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
00:32:01.175 [2024-12-06 17:47:53.089933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.089964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
00:32:01.175 [2024-12-06 17:47:53.090162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.090192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
00:32:01.175 [2024-12-06 17:47:53.090456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.090487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
00:32:01.175 [2024-12-06 17:47:53.090844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.090876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
00:32:01.175 [2024-12-06 17:47:53.091253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.091284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
00:32:01.175 [2024-12-06 17:47:53.091621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.175 [2024-12-06 17:47:53.091660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.175 qpair failed and we were unable to recover it.
[the identical three-line record - connect() failed, errno = 111; sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. - repeats through 2024-12-06 17:47:53.151033]
00:32:01.180 [2024-12-06 17:47:53.151385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.180 [2024-12-06 17:47:53.151416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420
00:32:01.180 qpair failed and we were unable to recover it.
00:32:01.180 [2024-12-06 17:47:53.151750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.151781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.152089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.152120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.152418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.152447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.152679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.152709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.153042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.153072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.153326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.153356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.153696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.153728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.154117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.154147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.154490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.154520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.154783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.154812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 
00:32:01.180 [2024-12-06 17:47:53.155123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.155153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.155499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.155529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.155756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.155786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.155986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.156016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc288000b90 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.156535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.156629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.157088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.157126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.157483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.157515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.157981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.158074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.158446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.158484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.158840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.158886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 
00:32:01.180 [2024-12-06 17:47:53.159239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.159271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.159508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.159538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.159899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.159931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.160280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.160311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.160689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.160720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.161065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.161096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.161437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.161467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.161673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.161705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.161949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.161978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.162334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.162363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 
00:32:01.180 [2024-12-06 17:47:53.162711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.162743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.163085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.180 [2024-12-06 17:47:53.163116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.180 qpair failed and we were unable to recover it. 00:32:01.180 [2024-12-06 17:47:53.163484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.163514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.163857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.163889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.164234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.164263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.164468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.164498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.164856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.164887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.165236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.165266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.165652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.165683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.166039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.166069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 
00:32:01.181 [2024-12-06 17:47:53.166429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.166460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.166811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.166843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.167193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.167222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.167554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.167584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.167938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.167970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.168320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.168349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.168702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.168739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.168931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.168961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.169331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.169361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.169697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.169729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 
00:32:01.181 [2024-12-06 17:47:53.170085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.170116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.170453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.170483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.170862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.170893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.171187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.171217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.171562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.171592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.171942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.171973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.172170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.172199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.172401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.172432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.172802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.172833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.173180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.173210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 
00:32:01.181 [2024-12-06 17:47:53.173559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.173590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.173933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.173965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.174180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.174209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.174299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.174329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.174678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.174710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.175062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.175093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.175431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.175462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.175808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.175838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.176063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.176093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.181 [2024-12-06 17:47:53.176400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.176430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 
00:32:01.181 [2024-12-06 17:47:53.176786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.181 [2024-12-06 17:47:53.176816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.181 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.177038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.177067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.177415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.177445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.177778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.177816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.178031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.178061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.178409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.178440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.178757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.178790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.179109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.179140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.179347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.179376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.179720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.179752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 
00:32:01.182 [2024-12-06 17:47:53.180121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.180151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.180487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.180518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.180876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.180908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.181260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.181290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.181513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.181542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.181879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.181911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.182161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.182191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.182586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.182617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.182957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.182988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.183336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.183367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 
00:32:01.182 [2024-12-06 17:47:53.183712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.183744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.184101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.184130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.184498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.184528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.184892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.184923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.185271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.185301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.185620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.185657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.186006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.186037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.186379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.186408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.186777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.186808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.187162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.187192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 
00:32:01.182 [2024-12-06 17:47:53.187541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.187571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.187809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.187840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.188194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.188224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.188573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.188604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.188818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.188848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.189172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.189202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.189421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.189450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.189781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.189812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.190030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.182 [2024-12-06 17:47:53.190060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.182 qpair failed and we were unable to recover it. 00:32:01.182 [2024-12-06 17:47:53.190399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.190429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 
00:32:01.183 [2024-12-06 17:47:53.190776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.190807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.191172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.191202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.191549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.191578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.191935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.191967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.192183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.192225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.192586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.192616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.192977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.193008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.193225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.193254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.193455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.193484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.193842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.193874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 
00:32:01.183 [2024-12-06 17:47:53.194219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.194248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.194453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.194482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.194859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.194891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.194979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.195008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.195352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.195382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.195631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.195673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.196029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.196059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.196406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.196436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.196783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.196815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 00:32:01.183 [2024-12-06 17:47:53.197174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.183 [2024-12-06 17:47:53.197204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.183 qpair failed and we were unable to recover it. 
00:32:01.183 [2024-12-06 17:47:53.197428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.183 [2024-12-06 17:47:53.197457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:01.183 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without interruption from 17:47:53.197787 through 17:47:53.269470. Nearly every attempt targets tqpair=0x23af0c0; two short runs target tqpair=0x7fc280000b90 (17:47:53.215123 to 17:47:53.216998) and tqpair=0x7fc288000b90 (17:47:53.240137 to 17:47:53.242954) ...]
00:32:01.466 [2024-12-06 17:47:53.269441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.466 [2024-12-06 17:47:53.269470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:01.466 qpair failed and we were unable to recover it.
00:32:01.466 [2024-12-06 17:47:53.269731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.466 [2024-12-06 17:47:53.269762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.466 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.270122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.270152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.270494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.270524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.270880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.270911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.271264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.271304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.271663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.271695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.272033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.272064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.272409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.272439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.272652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.272682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.273011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.273042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 
00:32:01.467 [2024-12-06 17:47:53.273387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.273416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.273750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.273781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.274133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.274163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.274383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.274412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.274645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.274677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.274972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.275003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.275246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.275275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.275632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.275688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.276043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.276073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.276294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.276324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 
00:32:01.467 [2024-12-06 17:47:53.276656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.276687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.276915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.276945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.277286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.277317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.277655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.467 [2024-12-06 17:47:53.277686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.467 qpair failed and we were unable to recover it. 00:32:01.467 [2024-12-06 17:47:53.277776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.277806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.278131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.278161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.278514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.278545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.278881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.278912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.279257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.279287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.279682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.279713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 
00:32:01.468 [2024-12-06 17:47:53.280094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.280124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.280474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.280505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.280867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.280899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.281237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.281267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.281628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.281667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.282037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.282067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.282402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.282433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.282780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.282811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.283149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.283180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.283398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.283428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 
00:32:01.468 [2024-12-06 17:47:53.283723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.283754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.284103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.284133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.284477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.284507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.284854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.284885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.285209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.285238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.285578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.285613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.285953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.285983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.286323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.286353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.286710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.286740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.287085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.287115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 
00:32:01.468 [2024-12-06 17:47:53.287446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.287476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.287819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.287850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.288087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.288117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.288452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.288481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.468 qpair failed and we were unable to recover it. 00:32:01.468 [2024-12-06 17:47:53.288690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.468 [2024-12-06 17:47:53.288720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.288955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.288986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.289396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.289426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.289771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.289802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.290154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.290184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.290550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.290580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 
00:32:01.469 [2024-12-06 17:47:53.290840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.290878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.291229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.291261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.291490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.291519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.291884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.291916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.292263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.292292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.292633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.292673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.293082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.293112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.293451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.293481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.293699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.293729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.294079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.294110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 
00:32:01.469 [2024-12-06 17:47:53.294310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.294339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.294643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.294674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.294890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.294926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.295261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.295291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.295675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.295708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.296066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.296095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.296454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.296484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.296852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.296884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.297242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.297271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.297622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.297659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 
00:32:01.469 [2024-12-06 17:47:53.297990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.298021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.298374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.298404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.298749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.298781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.469 [2024-12-06 17:47:53.299127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.469 [2024-12-06 17:47:53.299157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.469 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.299489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.299519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.299875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.299905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.300127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.300167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.300492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.300521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.300874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.300906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.301246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.301275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 
00:32:01.470 [2024-12-06 17:47:53.301492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.301521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.301740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.301771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.302116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.302147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.302493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.302522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.302880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.302911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.303240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.303270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.303618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.303668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.303970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.304000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.304350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.304380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.304729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.304767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 
00:32:01.470 [2024-12-06 17:47:53.304959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.304988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.305309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.305339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.305700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.305731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.306079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.306108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.306441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.306472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.306831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.306861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.307207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.307237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.307582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.307611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.307965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.307996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.308339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.308369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 
00:32:01.470 [2024-12-06 17:47:53.308589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.308619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.308963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.308994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.309356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.470 [2024-12-06 17:47:53.309386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.470 qpair failed and we were unable to recover it. 00:32:01.470 [2024-12-06 17:47:53.309601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.309632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.309901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.309932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.310247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.310277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.310657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.310689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.310896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.310925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.311282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.311312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.311678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.311710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 
00:32:01.471 [2024-12-06 17:47:53.312065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.312095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.312524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.312555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.312731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.312761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.313103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.313133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.313329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.313359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.313591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.313621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.313830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.313861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.314300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.314331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.314650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.314681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 00:32:01.471 [2024-12-06 17:47:53.314939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.471 [2024-12-06 17:47:53.314969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.471 qpair failed and we were unable to recover it. 
00:32:01.471 [2024-12-06 17:47:53.315310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.471 [2024-12-06 17:47:53.315339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:01.471 qpair failed and we were unable to recover it.
00:32:01.471 [... the same three-line failure record repeats back-to-back from 17:47:53.315710 through 17:47:53.336060: connect() failed, errno = 111; sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
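For context on the repeated record above: errno 111 on Linux is ECONNREFUSED, i.e. each TCP connection attempt to 10.0.0.2:4420 (the NVMe/TCP default port) is actively refused because no listener has been set up there yet, and the initiator gives up on the qpair. A minimal standalone sketch of how that errno surfaces from a plain blocking connect() (illustrative POSIX code, not SPDK's implementation):

/* sketch.c - reproduce "connect() failed, errno = 111" against a port with
 * no listener. Address and port are taken from the log above; any reachable
 * host that is not listening on the port behaves the same way. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening, the peer answers with RST and errno is
         * ECONNREFUSED, which is 111 on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}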
00:32:01.473 [... failure records continue from 17:47:53.336399 through 17:47:53.338041, same pattern: connect() failed, errno = 111 / sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it ...]
00:32:01.474 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:01.474 [... one more failure record at 17:47:53.338291 ...]
00:32:01.474 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:32:01.474 [... one more failure record at 17:47:53.338658 ...]
00:32:01.474 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:01.474 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:01.474 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:01.474 [... failure records, interleaved with the trace above, continue from 17:47:53.339038 through 17:47:53.341863: connect() failed, errno = 111 / sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it ...]
00:32:01.474 [... the same failure record repeats continuously from 17:47:53.342093 through 17:47:53.374731 (wall clock 00:32:01.474-00:32:01.477): connect() failed, errno = 111; sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:32:01.477 [... failure records continue from 17:47:53.374977 through 17:47:53.376415 ...]
00:32:01.478 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:01.478 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:01.478 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:01.478 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:01.478 [... three more failure records at 17:47:53.376770, .377241 and .377616 ...]
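The xtrace lines above show the harness moving past target startup: the trap registers cleanup (process_shm / nvmftestfini) for SIGINT/SIGTERM/EXIT, and rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the running target to create a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0. Under the hood that helper sends a JSON-RPC request to the target's RPC socket; a hedged sketch of the equivalent raw call, assuming SPDK's default socket path /var/tmp/spdk.sock and the documented bdev_malloc_create parameters (name, num_blocks, block_size):

/* rpc_sketch.c - hand-rolled JSON-RPC equivalent of
 * "bdev_malloc_create 64 512 -b Malloc0" (socket path and parameter
 * names are assumptions noted above, not taken from this log). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);
    if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        perror("connect to RPC socket");
        return 1;
    }

    /* 64 MB with 512-byte blocks = 131072 blocks, mirroring "64 512". */
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
        "\"params\":{\"name\":\"Malloc0\",\"num_blocks\":131072,"
        "\"block_size\":512}}";
    write(fd, req, strlen(req));

    char resp[512];
    ssize_t n = read(fd, resp, sizeof(resp) - 1);  /* expect "result":"Malloc0" */
    if (n > 0) { resp[n] = '\0'; printf("%s\n", resp); }
    close(fd);
    return 0;
}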
00:32:01.478 [2024-12-06 17:47:53.377983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.378013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.378384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.378413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.378741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.378771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.378993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.379023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.379365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.379395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.379749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.379779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.380118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.380148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.380473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.380504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.380863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.380893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.381237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.381267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 
00:32:01.478 [2024-12-06 17:47:53.381573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.381601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.381958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.381989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.382334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.382363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.382711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.382749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.383118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.383148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.383373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.383401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.383657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.383687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.384037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.384077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.384423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.384452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.384807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.384835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 
00:32:01.478 [2024-12-06 17:47:53.385196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.385225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.385560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.385589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.385981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.386011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.478 [2024-12-06 17:47:53.386320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.478 [2024-12-06 17:47:53.386357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.478 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.386547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.386581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.386960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.386991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.387352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.387381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.387710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.387740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.388101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.388129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.388468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.388497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 
00:32:01.479 [2024-12-06 17:47:53.388852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.388882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.389209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.389238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.389544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.389574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.389907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.389936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.390270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.390300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.390496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.390524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.390873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.390903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.391249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.391277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.391628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.391668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.391885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.391914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 
00:32:01.479 [2024-12-06 17:47:53.392273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.392303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.392519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.392547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.392897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.392927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.393241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.393273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.393691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.393722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.393952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.393980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.394327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.394356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.394575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.394604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.394967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.394997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 00:32:01.479 [2024-12-06 17:47:53.395311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.479 [2024-12-06 17:47:53.395341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.479 qpair failed and we were unable to recover it. 
00:32:01.480 [2024-12-06 17:47:53.395532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.395561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.395916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.395946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.396307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.396343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.396727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.396757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.397084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.397115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.397472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.397501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.397872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.397904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.398249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.398279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.398629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.398666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.399018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.399047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 
00:32:01.480 [2024-12-06 17:47:53.399239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.399269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.399483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.399511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.399831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.399861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.400203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.400234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.400587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.400615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.400834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.400865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.401062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.401090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.401427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.401456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.401792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.401823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.402175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.402205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 
00:32:01.480 [2024-12-06 17:47:53.402501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.402531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.402890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.402922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.403273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.403302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.403612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.403650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.404005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.404035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.404413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.404442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.404786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.404824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.405068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.405097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.405391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.405422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.480 qpair failed and we were unable to recover it. 00:32:01.480 [2024-12-06 17:47:53.405758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.480 [2024-12-06 17:47:53.405790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 
00:32:01.481 [2024-12-06 17:47:53.406150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.406180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.406532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.406561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.406906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.406936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.407134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.407163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.407525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.407554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.407765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.407795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.408020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.408048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.408354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.408384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.408725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.408754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.409070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.409099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 
00:32:01.481 [2024-12-06 17:47:53.409324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.409353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.409694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.409725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.410040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.410080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 Malloc0 00:32:01.481 [2024-12-06 17:47:53.410435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.410465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.410658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.410687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.411006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.411036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.481 [2024-12-06 17:47:53.411382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.411412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:01.481 [2024-12-06 17:47:53.411749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.481 [2024-12-06 17:47:53.411779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 
00:32:01.481 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:01.481 [2024-12-06 17:47:53.412151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.412181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.412416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.412445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.412781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.412811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.413070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.413099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.413389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.413418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.413676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.413706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.414064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.414095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.414345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.414377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.414754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.414785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 00:32:01.481 [2024-12-06 17:47:53.415143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.481 [2024-12-06 17:47:53.415172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.481 qpair failed and we were unable to recover it. 
00:32:01.481 [2024-12-06 17:47:53.415504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.415533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.415868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.415898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.416242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.416271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.416620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.416656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.416878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.416907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.417243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.417271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.417475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.417505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.417802] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.482 [2024-12-06 17:47:53.417841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.417871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.418256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.418285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.418621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.418665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 
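The nvmf_create_transport call traced at target_disconnect.sh@21 is what produces the "*** TCP Transport Init ***" notice from tcp.c just above: the target instantiates its TCP transport before any subsystem or listener exists. A sketch of the call in rpc.py form, with the flags mirrored verbatim from the trace (-t selects the transport type; the bare -o is carried over from the test script unchanged):

    # Initialize the NVMe-oF TCP transport inside the running SPDK target.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o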
00:32:01.482 [2024-12-06 17:47:53.418869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.418898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.419227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.419257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.419608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.419645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.419965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.419995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.420334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.420366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.420706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.420736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.421109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.421139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.421489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.421519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.421737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.421768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.422115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.422144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 
00:32:01.482 [2024-12-06 17:47:53.422346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.422375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.422715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.422745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.423076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.423106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 [2024-12-06 17:47:53.423468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.423497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.482 [2024-12-06 17:47:53.423887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.423918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:01.482 [2024-12-06 17:47:53.424139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.424167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.482 [2024-12-06 17:47:53.424390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.482 [2024-12-06 17:47:53.424419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.482 qpair failed and we were unable to recover it. 00:32:01.482 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:01.482 [2024-12-06 17:47:53.424700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.424730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 
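With the transport up, target_disconnect.sh@22 creates the subsystem the host will eventually attach to. In rpc.py form this is roughly the following (per standard SPDK usage, -a allows any host NQN to connect and -s sets the subsystem's serial number):

    # Create NVMe-oF subsystem cnode1, open to any host, with a fixed serial.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001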
00:32:01.483 [2024-12-06 17:47:53.424845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.424876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.425103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.425133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.425486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.425515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.425882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.425912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.426263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.426293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.426626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.426665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.426887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.426922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.427256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.427286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.427645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.427676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.427900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.427929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 
00:32:01.483 [2024-12-06 17:47:53.428264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.428294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.428520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.428550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.428885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.428915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.429335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.429365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.429709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.429741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.430120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.430151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.430481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.430511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.430727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.430758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.431117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.431147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.431491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.431521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 
00:32:01.483 [2024-12-06 17:47:53.431877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.431909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.432259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.432289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.432634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.432672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.433029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.433059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.433389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.433419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.433752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.433783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.434134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.434165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.434519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.434548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.434925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.434959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 00:32:01.483 [2024-12-06 17:47:53.435306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:01.483 [2024-12-06 17:47:53.435336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420 00:32:01.483 qpair failed and we were unable to recover it. 
00:32:01.484 [2024-12-06 17:47:53.435677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:01.484 [2024-12-06 17:47:53.435708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23af0c0 with addr=10.0.0.2, port=4420
00:32:01.484 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:01.484 qpair failed and we were unable to recover it.
00:32:01.484 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:01.484 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:01.484 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[connect() failures with errno = 111 on tqpair=0x23af0c0 continue, interleaved with the test traces kept below, from 17:47:53.436 until the target listener comes up at 17:47:53.454; duplicates elided]
00:32:01.485 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:01.485 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:01.485 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:01.485 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
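errno = 111 in the run above is ECONNREFUSED: the host is already retrying its TCP connect, but the target has not yet opened a listener on 10.0.0.2:4420, so every attempt is refused at the kernel level. A minimal standalone sketch with plain POSIX sockets (illustrative only, not SPDK code; it assumes the address is reachable with nothing listening, as in this test bed) reproduces the same errno:

    /* connect_refused.c - connect() to a reachable port with no listener;
     * on Linux this fails with errno = 111 (ECONNREFUSED), as in the log. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);   /* the test target's address */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }

Once the listener comes up in the NOTICE below, the TCP connect itself starts succeeding and the failure moves up a layer, to the Fabrics CONNECT command.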
00:32:01.486 [2024-12-06 17:47:53.454977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:01.486 [2024-12-06 17:47:53.458547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:01.486 [2024-12-06 17:47:53.458675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:01.486 [2024-12-06 17:47:53.458720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:01.486 [2024-12-06 17:47:53.458743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:01.486 [2024-12-06 17:47:53.458763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:01.486 [2024-12-06 17:47:53.458815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:01.486 qpair failed and we were unable to recover it.
00:32:01.486 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:01.486 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:01.486 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:01.486 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:01.486 [2024-12-06 17:47:53.468487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:01.486 [2024-12-06 17:47:53.468576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:01.486 [2024-12-06 17:47:53.468617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:01.486 [2024-12-06 17:47:53.468648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:01.486 [2024-12-06 17:47:53.468669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:01.486 [2024-12-06 17:47:53.468709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:01.486 qpair failed and we were unable to recover it.
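The sct/sc pair in these blocks decodes cleanly: SCT 1 is the command-specific status type, and for a Fabrics Connect command SC 130 (0x82) appears to be Connect Invalid Parameters in the NVMe-oF spec (SPDK's nvmf_spec.h names it SPDK_NVMF_FABRIC_SC_INVALID_PARAM), which lines up with the target-side "Unknown controller ID 0x1": the host's I/O-qpair CONNECT carries controller ID 1, which this target instance does not recognize. A small illustrative decoder, with the constants assumed from the spec:

    /* decode_status.c - map the log's "sct 1, sc 130" to a spec name. */
    #include <stdio.h>

    static const char *fabrics_connect_sc(unsigned sc)
    {
        switch (sc) {                     /* NVMe-oF command-specific SCs */
        case 0x80: return "Connect Incompatible Format";
        case 0x81: return "Connect Controller Busy";
        case 0x82: return "Connect Invalid Parameters";
        case 0x83: return "Connect Restart Discovery";
        case 0x84: return "Connect Invalid Host";
        default:   return "other command-specific status";
        }
    }

    int main(void)
    {
        unsigned sct = 1, sc = 130;       /* values taken from the log */
        if (sct == 1)                     /* SCT 1 = command specific */
            printf("sct %u, sc %u (0x%02x): %s\n", sct, sc, sc, fabrics_connect_sc(sc));
        return 0;
    }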
00:32:01.486 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:01.486 17:47:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1735099
00:32:01.486 [2024-12-06 17:47:53.478385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:01.486 [2024-12-06 17:47:53.478452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:01.486 [2024-12-06 17:47:53.478478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:01.486 [2024-12-06 17:47:53.478492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:01.486 [2024-12-06 17:47:53.478506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:01.486 [2024-12-06 17:47:53.478539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:01.486 qpair failed and we were unable to recover it.
[the same connect-poll failure block repeats for tqpair=0x23af0c0 roughly every 10 ms from 17:47:53.488 through 17:47:53.849; duplicates elided, last occurrence kept below]
00:32:01.486 [2024-12-06 17:47:53.508458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.486 [2024-12-06 17:47:53.508504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.486 [2024-12-06 17:47:53.508519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.486 [2024-12-06 17:47:53.508526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.486 [2024-12-06 17:47:53.508533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.486 [2024-12-06 17:47:53.508547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.486 qpair failed and we were unable to recover it. 00:32:01.785 [2024-12-06 17:47:53.518475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.785 [2024-12-06 17:47:53.518521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.785 [2024-12-06 17:47:53.518535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.785 [2024-12-06 17:47:53.518543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.785 [2024-12-06 17:47:53.518549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.785 [2024-12-06 17:47:53.518562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.785 qpair failed and we were unable to recover it. 00:32:01.785 [2024-12-06 17:47:53.528523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.785 [2024-12-06 17:47:53.528577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.785 [2024-12-06 17:47:53.528591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.785 [2024-12-06 17:47:53.528599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.785 [2024-12-06 17:47:53.528605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.785 [2024-12-06 17:47:53.528619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.785 qpair failed and we were unable to recover it. 
00:32:01.785 [2024-12-06 17:47:53.538618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.785 [2024-12-06 17:47:53.538699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.785 [2024-12-06 17:47:53.538714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.785 [2024-12-06 17:47:53.538721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.785 [2024-12-06 17:47:53.538728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.785 [2024-12-06 17:47:53.538742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.785 qpair failed and we were unable to recover it. 00:32:01.785 [2024-12-06 17:47:53.548557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.785 [2024-12-06 17:47:53.548608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.785 [2024-12-06 17:47:53.548621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.785 [2024-12-06 17:47:53.548629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.785 [2024-12-06 17:47:53.548635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.785 [2024-12-06 17:47:53.548656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.785 qpair failed and we were unable to recover it. 00:32:01.785 [2024-12-06 17:47:53.558583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.785 [2024-12-06 17:47:53.558632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.785 [2024-12-06 17:47:53.558650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.785 [2024-12-06 17:47:53.558657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.785 [2024-12-06 17:47:53.558664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.785 [2024-12-06 17:47:53.558678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.785 qpair failed and we were unable to recover it. 
00:32:01.785 [2024-12-06 17:47:53.568615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.785 [2024-12-06 17:47:53.568702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.785 [2024-12-06 17:47:53.568720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.785 [2024-12-06 17:47:53.568729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.785 [2024-12-06 17:47:53.568735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.785 [2024-12-06 17:47:53.568750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.785 qpair failed and we were unable to recover it. 00:32:01.785 [2024-12-06 17:47:53.578678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.578744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.578760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.578768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.578776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.578793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 00:32:01.786 [2024-12-06 17:47:53.588650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.588707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.588721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.588728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.588735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.588749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 
00:32:01.786 [2024-12-06 17:47:53.598554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.598604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.598618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.598625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.598632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.598649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 00:32:01.786 [2024-12-06 17:47:53.608743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.608798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.608812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.608819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.608829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.608843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 00:32:01.786 [2024-12-06 17:47:53.618754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.618813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.618827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.618834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.618840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.618854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 
00:32:01.786 [2024-12-06 17:47:53.628752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.628801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.628815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.628822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.628829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.628843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 00:32:01.786 [2024-12-06 17:47:53.638763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.638812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.638826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.638833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.638839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.638853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 00:32:01.786 [2024-12-06 17:47:53.648821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.648887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.648900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.648908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.648914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.648928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 
00:32:01.786 [2024-12-06 17:47:53.658841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.658897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.658910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.658918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.658925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.658938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 00:32:01.786 [2024-12-06 17:47:53.668734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.668780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.668794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.668801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.668808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.668821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 00:32:01.786 [2024-12-06 17:47:53.678901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.678947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.678961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.678968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.678975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.678988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 
00:32:01.786 [2024-12-06 17:47:53.689077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.689151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.689165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.689172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.689179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.689192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 00:32:01.786 [2024-12-06 17:47:53.699048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.786 [2024-12-06 17:47:53.699101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.786 [2024-12-06 17:47:53.699118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.786 [2024-12-06 17:47:53.699126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.786 [2024-12-06 17:47:53.699132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.786 [2024-12-06 17:47:53.699146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.786 qpair failed and we were unable to recover it. 00:32:01.786 [2024-12-06 17:47:53.709018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.787 [2024-12-06 17:47:53.709065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.787 [2024-12-06 17:47:53.709080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.787 [2024-12-06 17:47:53.709087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.787 [2024-12-06 17:47:53.709094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.787 [2024-12-06 17:47:53.709108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.787 qpair failed and we were unable to recover it. 
00:32:01.787 [2024-12-06 17:47:53.718925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.787 [2024-12-06 17:47:53.718984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.787 [2024-12-06 17:47:53.718997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.787 [2024-12-06 17:47:53.719005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.787 [2024-12-06 17:47:53.719011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.787 [2024-12-06 17:47:53.719025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.787 qpair failed and we were unable to recover it. 00:32:01.787 [2024-12-06 17:47:53.729078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.787 [2024-12-06 17:47:53.729134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.787 [2024-12-06 17:47:53.729148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.787 [2024-12-06 17:47:53.729155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.787 [2024-12-06 17:47:53.729162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.787 [2024-12-06 17:47:53.729175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.787 qpair failed and we were unable to recover it. 00:32:01.787 [2024-12-06 17:47:53.739081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.787 [2024-12-06 17:47:53.739151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.787 [2024-12-06 17:47:53.739164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.787 [2024-12-06 17:47:53.739172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.787 [2024-12-06 17:47:53.739182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.787 [2024-12-06 17:47:53.739196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.787 qpair failed and we were unable to recover it. 
00:32:01.787 [2024-12-06 17:47:53.749065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.787 [2024-12-06 17:47:53.749111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.787 [2024-12-06 17:47:53.749125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.787 [2024-12-06 17:47:53.749132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.787 [2024-12-06 17:47:53.749139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.787 [2024-12-06 17:47:53.749153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.787 qpair failed and we were unable to recover it. 00:32:01.787 [2024-12-06 17:47:53.759122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.787 [2024-12-06 17:47:53.759202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.787 [2024-12-06 17:47:53.759215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.787 [2024-12-06 17:47:53.759223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.787 [2024-12-06 17:47:53.759230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.787 [2024-12-06 17:47:53.759243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.787 qpair failed and we were unable to recover it. 00:32:01.787 [2024-12-06 17:47:53.769187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.787 [2024-12-06 17:47:53.769241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.787 [2024-12-06 17:47:53.769254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.787 [2024-12-06 17:47:53.769261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.787 [2024-12-06 17:47:53.769268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.787 [2024-12-06 17:47:53.769281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.787 qpair failed and we were unable to recover it. 
00:32:01.787 [2024-12-06 17:47:53.779214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.787 [2024-12-06 17:47:53.779276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.787 [2024-12-06 17:47:53.779289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.787 [2024-12-06 17:47:53.779296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.787 [2024-12-06 17:47:53.779303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.787 [2024-12-06 17:47:53.779317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.787 qpair failed and we were unable to recover it. 00:32:01.787 [2024-12-06 17:47:53.789196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.787 [2024-12-06 17:47:53.789245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.787 [2024-12-06 17:47:53.789258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.787 [2024-12-06 17:47:53.789265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.787 [2024-12-06 17:47:53.789272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.787 [2024-12-06 17:47:53.789285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.787 qpair failed and we were unable to recover it. 00:32:01.787 [2024-12-06 17:47:53.799217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:01.787 [2024-12-06 17:47:53.799269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:01.787 [2024-12-06 17:47:53.799284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:01.787 [2024-12-06 17:47:53.799292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:01.787 [2024-12-06 17:47:53.799298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:01.787 [2024-12-06 17:47:53.799315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:01.787 qpair failed and we were unable to recover it. 
00:32:01.787 [2024-12-06 17:47:53.809287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:01.787 [2024-12-06 17:47:53.809342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:01.787 [2024-12-06 17:47:53.809356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:01.787 [2024-12-06 17:47:53.809363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:01.787 [2024-12-06 17:47:53.809370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:01.787 [2024-12-06 17:47:53.809384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:01.787 qpair failed and we were unable to recover it.
00:32:01.787 [2024-12-06 17:47:53.819308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:01.787 [2024-12-06 17:47:53.819365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:01.787 [2024-12-06 17:47:53.819383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:01.787 [2024-12-06 17:47:53.819390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:01.787 [2024-12-06 17:47:53.819397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:01.787 [2024-12-06 17:47:53.819412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:01.787 qpair failed and we were unable to recover it.
00:32:01.787 [2024-12-06 17:47:53.829301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:01.787 [2024-12-06 17:47:53.829348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:01.787 [2024-12-06 17:47:53.829365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:01.787 [2024-12-06 17:47:53.829372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:01.787 [2024-12-06 17:47:53.829379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:01.787 [2024-12-06 17:47:53.829393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:01.787 qpair failed and we were unable to recover it.
00:32:01.787 [2024-12-06 17:47:53.839312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:01.787 [2024-12-06 17:47:53.839358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:01.787 [2024-12-06 17:47:53.839371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:01.788 [2024-12-06 17:47:53.839378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:01.788 [2024-12-06 17:47:53.839385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:01.788 [2024-12-06 17:47:53.839399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:01.788 qpair failed and we were unable to recover it.
00:32:02.062 [2024-12-06 17:47:53.849415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.062 [2024-12-06 17:47:53.849472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.062 [2024-12-06 17:47:53.849485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.062 [2024-12-06 17:47:53.849492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.062 [2024-12-06 17:47:53.849499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.062 [2024-12-06 17:47:53.849512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.062 qpair failed and we were unable to recover it.
00:32:02.062 [2024-12-06 17:47:53.859436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.062 [2024-12-06 17:47:53.859486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.062 [2024-12-06 17:47:53.859500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.062 [2024-12-06 17:47:53.859507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.062 [2024-12-06 17:47:53.859514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.062 [2024-12-06 17:47:53.859528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.062 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.869423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.869468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.869482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.869489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.869499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.869514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.879436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.879485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.879499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.879507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.879513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.879527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.889511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.889567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.889581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.889588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.889595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.889608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.899550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.899612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.899625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.899632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.899643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.899657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.909494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.909540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.909553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.909561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.909567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.909581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.919555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.919605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.919620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.919627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.919634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.919652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.929631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.929690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.929704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.929711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.929717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.929731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.939650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.939711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.939725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.939732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.939739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.939752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.949630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.949685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.949698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.949705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.949712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.949725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.959665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.959717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.959734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.959741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.959748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.959762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.969607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.969669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.969683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.969690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.969696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.969710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.979774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.979831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.979844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.979851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.979858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.979871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.063 qpair failed and we were unable to recover it.
00:32:02.063 [2024-12-06 17:47:53.989730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.063 [2024-12-06 17:47:53.989779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.063 [2024-12-06 17:47:53.989791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.063 [2024-12-06 17:47:53.989799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.063 [2024-12-06 17:47:53.989805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.063 [2024-12-06 17:47:53.989819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:53.999772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:53.999828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:53.999841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:53.999848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:53.999859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:53.999873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.009867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.009919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.009933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.009940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.009947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.009960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.019864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.019921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.019934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.019942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.019948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.019962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.029873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.029925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.029939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.029946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.029952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.029966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.039870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.039916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.039930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.039937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.039943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.039957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.049956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.050011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.050025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.050032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.050038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.050052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.059985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.060048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.060063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.060070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.060077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.060095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.069847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.069898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.069912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.069919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.069926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.069940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.079861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.079948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.079961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.079970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.079976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.079990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.090050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.090102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.090119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.090127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.090133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.090146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.100089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.100145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.100158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.100166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.100172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.100186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.110065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.110117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.110131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.110138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.110144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.110158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.064 qpair failed and we were unable to recover it.
00:32:02.064 [2024-12-06 17:47:54.120122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.064 [2024-12-06 17:47:54.120202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.064 [2024-12-06 17:47:54.120218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.064 [2024-12-06 17:47:54.120227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.064 [2024-12-06 17:47:54.120234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.064 [2024-12-06 17:47:54.120249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.065 qpair failed and we were unable to recover it.
00:32:02.326 [2024-12-06 17:47:54.130154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.326 [2024-12-06 17:47:54.130210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.326 [2024-12-06 17:47:54.130224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.326 [2024-12-06 17:47:54.130232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.326 [2024-12-06 17:47:54.130242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.326 [2024-12-06 17:47:54.130257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.326 qpair failed and we were unable to recover it.
00:32:02.326 [2024-12-06 17:47:54.140185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.326 [2024-12-06 17:47:54.140261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.326 [2024-12-06 17:47:54.140275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.326 [2024-12-06 17:47:54.140282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.326 [2024-12-06 17:47:54.140289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.326 [2024-12-06 17:47:54.140304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.326 qpair failed and we were unable to recover it.
00:32:02.326 [2024-12-06 17:47:54.150155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.326 [2024-12-06 17:47:54.150205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.326 [2024-12-06 17:47:54.150221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.326 [2024-12-06 17:47:54.150228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.326 [2024-12-06 17:47:54.150235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.326 [2024-12-06 17:47:54.150253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.326 qpair failed and we were unable to recover it.
00:32:02.326 [2024-12-06 17:47:54.160221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.326 [2024-12-06 17:47:54.160271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.326 [2024-12-06 17:47:54.160285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.326 [2024-12-06 17:47:54.160293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.326 [2024-12-06 17:47:54.160300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.326 [2024-12-06 17:47:54.160314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.326 qpair failed and we were unable to recover it.
00:32:02.326 [2024-12-06 17:47:54.170164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.326 [2024-12-06 17:47:54.170239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.326 [2024-12-06 17:47:54.170252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.326 [2024-12-06 17:47:54.170260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.326 [2024-12-06 17:47:54.170266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.326 [2024-12-06 17:47:54.170281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.326 qpair failed and we were unable to recover it.
00:32:02.326 [2024-12-06 17:47:54.180314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.326 [2024-12-06 17:47:54.180372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.326 [2024-12-06 17:47:54.180386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.326 [2024-12-06 17:47:54.180393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.326 [2024-12-06 17:47:54.180400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.326 [2024-12-06 17:47:54.180413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.326 qpair failed and we were unable to recover it.
00:32:02.326 [2024-12-06 17:47:54.190290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.326 [2024-12-06 17:47:54.190339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.326 [2024-12-06 17:47:54.190352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.326 [2024-12-06 17:47:54.190360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.326 [2024-12-06 17:47:54.190367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.326 [2024-12-06 17:47:54.190380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.326 qpair failed and we were unable to recover it.
00:32:02.326 [2024-12-06 17:47:54.200324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.326 [2024-12-06 17:47:54.200376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.326 [2024-12-06 17:47:54.200389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.326 [2024-12-06 17:47:54.200397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.326 [2024-12-06 17:47:54.200404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.326 [2024-12-06 17:47:54.200417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.326 qpair failed and we were unable to recover it.
00:32:02.326 [2024-12-06 17:47:54.210406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.326 [2024-12-06 17:47:54.210466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.326 [2024-12-06 17:47:54.210480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.326 [2024-12-06 17:47:54.210487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.326 [2024-12-06 17:47:54.210493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.326 [2024-12-06 17:47:54.210506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.326 qpair failed and we were unable to recover it.
00:32:02.326 [2024-12-06 17:47:54.220447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.326 [2024-12-06 17:47:54.220510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.220540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.220549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.220557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.220577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.230382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.230431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.230447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.230455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.230462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.230477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.240441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.240492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.240506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.240513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.240520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.240534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.250506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.250562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.250576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.250584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.250590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.250605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.260554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.260611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.260625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.260633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.260649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.260663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.270491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.270541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.270555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.270562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.270568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.270582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.280570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.280622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.280640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.280648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.280654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.280668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.290622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.290685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.290699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.290706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.290713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.290726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.300528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.300580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.300594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.300601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.300607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.300621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.310615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.310674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.310689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.310696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.310703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.310717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.320633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.320757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.320771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.320778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.320785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.320799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.330716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.330771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.330785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.330792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.330799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.330812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.340736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.340792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.327 [2024-12-06 17:47:54.340807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.327 [2024-12-06 17:47:54.340814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.327 [2024-12-06 17:47:54.340821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.327 [2024-12-06 17:47:54.340836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.327 qpair failed and we were unable to recover it.
00:32:02.327 [2024-12-06 17:47:54.350709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.327 [2024-12-06 17:47:54.350753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.328 [2024-12-06 17:47:54.350770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.328 [2024-12-06 17:47:54.350778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.328 [2024-12-06 17:47:54.350784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.328 [2024-12-06 17:47:54.350799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.328 qpair failed and we were unable to recover it.
00:32:02.328 [2024-12-06 17:47:54.360722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.328 [2024-12-06 17:47:54.360769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.328 [2024-12-06 17:47:54.360783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.328 [2024-12-06 17:47:54.360791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.328 [2024-12-06 17:47:54.360797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.328 [2024-12-06 17:47:54.360811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.328 qpair failed and we were unable to recover it.
00:32:02.328 [2024-12-06 17:47:54.370809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:02.328 [2024-12-06 17:47:54.370866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:02.328 [2024-12-06 17:47:54.370880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:02.328 [2024-12-06 17:47:54.370887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.328 [2024-12-06 17:47:54.370894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:02.328 [2024-12-06 17:47:54.370907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:02.328 qpair failed and we were unable to recover it.
00:32:02.328 [2024-12-06 17:47:54.380827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.328 [2024-12-06 17:47:54.380885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.328 [2024-12-06 17:47:54.380898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.328 [2024-12-06 17:47:54.380905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.328 [2024-12-06 17:47:54.380912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.328 [2024-12-06 17:47:54.380925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.328 qpair failed and we were unable to recover it. 00:32:02.589 [2024-12-06 17:47:54.390807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.589 [2024-12-06 17:47:54.390852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.589 [2024-12-06 17:47:54.390866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.589 [2024-12-06 17:47:54.390873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.589 [2024-12-06 17:47:54.390883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.589 [2024-12-06 17:47:54.390897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.589 qpair failed and we were unable to recover it. 00:32:02.589 [2024-12-06 17:47:54.400870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.589 [2024-12-06 17:47:54.400918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.589 [2024-12-06 17:47:54.400931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.589 [2024-12-06 17:47:54.400939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.589 [2024-12-06 17:47:54.400945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.589 [2024-12-06 17:47:54.400959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.589 qpair failed and we were unable to recover it. 
00:32:02.589 [2024-12-06 17:47:54.410922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.589 [2024-12-06 17:47:54.410978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.589 [2024-12-06 17:47:54.410992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.589 [2024-12-06 17:47:54.410999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.589 [2024-12-06 17:47:54.411005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.589 [2024-12-06 17:47:54.411019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.589 qpair failed and we were unable to recover it. 00:32:02.589 [2024-12-06 17:47:54.420966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.589 [2024-12-06 17:47:54.421067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.589 [2024-12-06 17:47:54.421080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.589 [2024-12-06 17:47:54.421088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.589 [2024-12-06 17:47:54.421094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.589 [2024-12-06 17:47:54.421108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.589 qpair failed and we were unable to recover it. 00:32:02.589 [2024-12-06 17:47:54.430933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.589 [2024-12-06 17:47:54.430980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.589 [2024-12-06 17:47:54.430998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.589 [2024-12-06 17:47:54.431005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.589 [2024-12-06 17:47:54.431012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.589 [2024-12-06 17:47:54.431028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.589 qpair failed and we were unable to recover it. 
00:32:02.589 [2024-12-06 17:47:54.440969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.589 [2024-12-06 17:47:54.441016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.589 [2024-12-06 17:47:54.441032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.589 [2024-12-06 17:47:54.441039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.589 [2024-12-06 17:47:54.441046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.589 [2024-12-06 17:47:54.441061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.589 qpair failed and we were unable to recover it. 00:32:02.589 [2024-12-06 17:47:54.451044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.589 [2024-12-06 17:47:54.451099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.589 [2024-12-06 17:47:54.451116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.589 [2024-12-06 17:47:54.451124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.589 [2024-12-06 17:47:54.451130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.589 [2024-12-06 17:47:54.451146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.589 qpair failed and we were unable to recover it. 00:32:02.589 [2024-12-06 17:47:54.461068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.589 [2024-12-06 17:47:54.461124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.589 [2024-12-06 17:47:54.461140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.589 [2024-12-06 17:47:54.461147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.589 [2024-12-06 17:47:54.461154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.461168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 
00:32:02.590 [2024-12-06 17:47:54.470977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.471074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.471090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.471097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.471103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.471118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 00:32:02.590 [2024-12-06 17:47:54.481003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.481055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.481072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.481079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.481086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.481100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 00:32:02.590 [2024-12-06 17:47:54.491163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.491215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.491228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.491235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.491242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.491256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 
00:32:02.590 [2024-12-06 17:47:54.501181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.501235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.501249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.501256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.501262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.501276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 00:32:02.590 [2024-12-06 17:47:54.511162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.511208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.511221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.511228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.511235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.511249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 00:32:02.590 [2024-12-06 17:47:54.521169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.521220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.521233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.521241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.521250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.521264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 
00:32:02.590 [2024-12-06 17:47:54.531158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.531252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.531266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.531275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.531281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.531294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 00:32:02.590 [2024-12-06 17:47:54.541301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.541355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.541370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.541377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.541384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.541398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 00:32:02.590 [2024-12-06 17:47:54.551266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.551320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.551345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.551354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.551361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.551380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 
00:32:02.590 [2024-12-06 17:47:54.561287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.561342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.561367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.561376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.561383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.561402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 00:32:02.590 [2024-12-06 17:47:54.571363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.571421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.571437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.571444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.571451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.571465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 00:32:02.590 [2024-12-06 17:47:54.581382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.581439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.581453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.581460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.590 [2024-12-06 17:47:54.581466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.590 [2024-12-06 17:47:54.581481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.590 qpair failed and we were unable to recover it. 
00:32:02.590 [2024-12-06 17:47:54.591382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.590 [2024-12-06 17:47:54.591433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.590 [2024-12-06 17:47:54.591447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.590 [2024-12-06 17:47:54.591454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.591 [2024-12-06 17:47:54.591461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.591 [2024-12-06 17:47:54.591475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.591 qpair failed and we were unable to recover it. 00:32:02.591 [2024-12-06 17:47:54.601399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.591 [2024-12-06 17:47:54.601453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.591 [2024-12-06 17:47:54.601468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.591 [2024-12-06 17:47:54.601475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.591 [2024-12-06 17:47:54.601482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.591 [2024-12-06 17:47:54.601495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.591 qpair failed and we were unable to recover it. 00:32:02.591 [2024-12-06 17:47:54.611478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.591 [2024-12-06 17:47:54.611533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.591 [2024-12-06 17:47:54.611550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.591 [2024-12-06 17:47:54.611557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.591 [2024-12-06 17:47:54.611564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.591 [2024-12-06 17:47:54.611578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.591 qpair failed and we were unable to recover it. 
00:32:02.591 [2024-12-06 17:47:54.621500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.591 [2024-12-06 17:47:54.621555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.591 [2024-12-06 17:47:54.621568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.591 [2024-12-06 17:47:54.621576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.591 [2024-12-06 17:47:54.621582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.591 [2024-12-06 17:47:54.621596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.591 qpair failed and we were unable to recover it. 00:32:02.591 [2024-12-06 17:47:54.631460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.591 [2024-12-06 17:47:54.631509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.591 [2024-12-06 17:47:54.631522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.591 [2024-12-06 17:47:54.631530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.591 [2024-12-06 17:47:54.631536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.591 [2024-12-06 17:47:54.631551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.591 qpair failed and we were unable to recover it. 00:32:02.591 [2024-12-06 17:47:54.641517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.591 [2024-12-06 17:47:54.641566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.591 [2024-12-06 17:47:54.641579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.591 [2024-12-06 17:47:54.641586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.591 [2024-12-06 17:47:54.641593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.591 [2024-12-06 17:47:54.641606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.591 qpair failed and we were unable to recover it. 
00:32:02.591 [2024-12-06 17:47:54.651601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.591 [2024-12-06 17:47:54.651657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.591 [2024-12-06 17:47:54.651671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.591 [2024-12-06 17:47:54.651678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.591 [2024-12-06 17:47:54.651688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.591 [2024-12-06 17:47:54.651703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.591 qpair failed and we were unable to recover it. 00:32:02.853 [2024-12-06 17:47:54.661502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.853 [2024-12-06 17:47:54.661558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.853 [2024-12-06 17:47:54.661572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.853 [2024-12-06 17:47:54.661579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.853 [2024-12-06 17:47:54.661586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.853 [2024-12-06 17:47:54.661600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.853 qpair failed and we were unable to recover it. 00:32:02.853 [2024-12-06 17:47:54.671589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.853 [2024-12-06 17:47:54.671641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.853 [2024-12-06 17:47:54.671655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.853 [2024-12-06 17:47:54.671663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.853 [2024-12-06 17:47:54.671670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.853 [2024-12-06 17:47:54.671684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.853 qpair failed and we were unable to recover it. 
00:32:02.853 [2024-12-06 17:47:54.681633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.853 [2024-12-06 17:47:54.681690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.853 [2024-12-06 17:47:54.681704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.853 [2024-12-06 17:47:54.681711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.853 [2024-12-06 17:47:54.681718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.853 [2024-12-06 17:47:54.681732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.853 qpair failed and we were unable to recover it. 00:32:02.853 [2024-12-06 17:47:54.691665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.853 [2024-12-06 17:47:54.691721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.853 [2024-12-06 17:47:54.691735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.853 [2024-12-06 17:47:54.691742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.853 [2024-12-06 17:47:54.691749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.853 [2024-12-06 17:47:54.691763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.853 qpair failed and we were unable to recover it. 00:32:02.853 [2024-12-06 17:47:54.701722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.853 [2024-12-06 17:47:54.701782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.853 [2024-12-06 17:47:54.701795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.853 [2024-12-06 17:47:54.701803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.853 [2024-12-06 17:47:54.701809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.853 [2024-12-06 17:47:54.701823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.853 qpair failed and we were unable to recover it. 
00:32:02.853 [2024-12-06 17:47:54.711698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.853 [2024-12-06 17:47:54.711762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.853 [2024-12-06 17:47:54.711776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.853 [2024-12-06 17:47:54.711783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.853 [2024-12-06 17:47:54.711789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.853 [2024-12-06 17:47:54.711803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.853 qpair failed and we were unable to recover it. 00:32:02.853 [2024-12-06 17:47:54.721712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.853 [2024-12-06 17:47:54.721758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.853 [2024-12-06 17:47:54.721772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.853 [2024-12-06 17:47:54.721779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.853 [2024-12-06 17:47:54.721786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.853 [2024-12-06 17:47:54.721800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.853 qpair failed and we were unable to recover it. 00:32:02.853 [2024-12-06 17:47:54.731814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.853 [2024-12-06 17:47:54.731870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.853 [2024-12-06 17:47:54.731884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.853 [2024-12-06 17:47:54.731891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.853 [2024-12-06 17:47:54.731898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.853 [2024-12-06 17:47:54.731911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.853 qpair failed and we were unable to recover it. 
00:32:02.853 [2024-12-06 17:47:54.741883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.741964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.741981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.741988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.854 [2024-12-06 17:47:54.741995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.854 [2024-12-06 17:47:54.742009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.854 qpair failed and we were unable to recover it. 00:32:02.854 [2024-12-06 17:47:54.751711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.751760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.751775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.751782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.854 [2024-12-06 17:47:54.751789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.854 [2024-12-06 17:47:54.751803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.854 qpair failed and we were unable to recover it. 00:32:02.854 [2024-12-06 17:47:54.761864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.761910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.761924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.761934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.854 [2024-12-06 17:47:54.761940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.854 [2024-12-06 17:47:54.761954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.854 qpair failed and we were unable to recover it. 
00:32:02.854 [2024-12-06 17:47:54.771897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.771979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.771993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.772000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.854 [2024-12-06 17:47:54.772006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.854 [2024-12-06 17:47:54.772019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.854 qpair failed and we were unable to recover it. 00:32:02.854 [2024-12-06 17:47:54.781916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.781968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.781981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.781988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.854 [2024-12-06 17:47:54.781998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.854 [2024-12-06 17:47:54.782012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.854 qpair failed and we were unable to recover it. 00:32:02.854 [2024-12-06 17:47:54.791922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.791972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.791986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.791994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.854 [2024-12-06 17:47:54.792000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.854 [2024-12-06 17:47:54.792013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.854 qpair failed and we were unable to recover it. 
00:32:02.854 [2024-12-06 17:47:54.801940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.801985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.801998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.802006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.854 [2024-12-06 17:47:54.802013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.854 [2024-12-06 17:47:54.802026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.854 qpair failed and we were unable to recover it. 00:32:02.854 [2024-12-06 17:47:54.811993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.812050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.812063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.812071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.854 [2024-12-06 17:47:54.812077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.854 [2024-12-06 17:47:54.812091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.854 qpair failed and we were unable to recover it. 00:32:02.854 [2024-12-06 17:47:54.822103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.822191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.822204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.822212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.854 [2024-12-06 17:47:54.822218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.854 [2024-12-06 17:47:54.822232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.854 qpair failed and we were unable to recover it. 
00:32:02.854 [2024-12-06 17:47:54.832022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.832071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.832084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.832091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.854 [2024-12-06 17:47:54.832098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.854 [2024-12-06 17:47:54.832112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.854 qpair failed and we were unable to recover it. 00:32:02.854 [2024-12-06 17:47:54.842030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.854 [2024-12-06 17:47:54.842079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.854 [2024-12-06 17:47:54.842092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.854 [2024-12-06 17:47:54.842099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.855 [2024-12-06 17:47:54.842106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.855 [2024-12-06 17:47:54.842120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.855 qpair failed and we were unable to recover it. 00:32:02.855 [2024-12-06 17:47:54.852060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.855 [2024-12-06 17:47:54.852116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.855 [2024-12-06 17:47:54.852129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.855 [2024-12-06 17:47:54.852136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.855 [2024-12-06 17:47:54.852143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.855 [2024-12-06 17:47:54.852156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.855 qpair failed and we were unable to recover it. 
00:32:02.855 [2024-12-06 17:47:54.862047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.855 [2024-12-06 17:47:54.862107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.855 [2024-12-06 17:47:54.862120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.855 [2024-12-06 17:47:54.862127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.855 [2024-12-06 17:47:54.862134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.855 [2024-12-06 17:47:54.862148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.855 qpair failed and we were unable to recover it. 00:32:02.855 [2024-12-06 17:47:54.872152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.855 [2024-12-06 17:47:54.872203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.855 [2024-12-06 17:47:54.872220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.855 [2024-12-06 17:47:54.872227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.855 [2024-12-06 17:47:54.872234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.855 [2024-12-06 17:47:54.872248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.855 qpair failed and we were unable to recover it. 00:32:02.855 [2024-12-06 17:47:54.882160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.855 [2024-12-06 17:47:54.882208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.855 [2024-12-06 17:47:54.882221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.855 [2024-12-06 17:47:54.882228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.855 [2024-12-06 17:47:54.882235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.855 [2024-12-06 17:47:54.882248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.855 qpair failed and we were unable to recover it. 
00:32:02.855 [2024-12-06 17:47:54.892241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.855 [2024-12-06 17:47:54.892315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.855 [2024-12-06 17:47:54.892328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.855 [2024-12-06 17:47:54.892335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.855 [2024-12-06 17:47:54.892341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.855 [2024-12-06 17:47:54.892355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.855 qpair failed and we were unable to recover it. 00:32:02.855 [2024-12-06 17:47:54.902228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.855 [2024-12-06 17:47:54.902282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.855 [2024-12-06 17:47:54.902296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.855 [2024-12-06 17:47:54.902303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.855 [2024-12-06 17:47:54.902309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.855 [2024-12-06 17:47:54.902323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.855 qpair failed and we were unable to recover it. 00:32:02.855 [2024-12-06 17:47:54.912243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:02.855 [2024-12-06 17:47:54.912300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:02.855 [2024-12-06 17:47:54.912314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:02.855 [2024-12-06 17:47:54.912321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:02.855 [2024-12-06 17:47:54.912331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:02.855 [2024-12-06 17:47:54.912345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:02.855 qpair failed and we were unable to recover it. 
00:32:03.117 [2024-12-06 17:47:54.922271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.117 [2024-12-06 17:47:54.922325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.117 [2024-12-06 17:47:54.922338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.117 [2024-12-06 17:47:54.922345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.117 [2024-12-06 17:47:54.922351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.117 [2024-12-06 17:47:54.922365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.117 qpair failed and we were unable to recover it. 00:32:03.117 [2024-12-06 17:47:54.932339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.117 [2024-12-06 17:47:54.932393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.117 [2024-12-06 17:47:54.932407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.117 [2024-12-06 17:47:54.932414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.117 [2024-12-06 17:47:54.932421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.117 [2024-12-06 17:47:54.932434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.117 qpair failed and we were unable to recover it. 00:32:03.117 [2024-12-06 17:47:54.942363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.117 [2024-12-06 17:47:54.942421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.117 [2024-12-06 17:47:54.942446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.117 [2024-12-06 17:47:54.942455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.118 [2024-12-06 17:47:54.942462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.118 [2024-12-06 17:47:54.942481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.118 qpair failed and we were unable to recover it. 
00:32:03.118 [2024-12-06 17:47:54.952341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.118 [2024-12-06 17:47:54.952391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.118 [2024-12-06 17:47:54.952406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.118 [2024-12-06 17:47:54.952413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.118 [2024-12-06 17:47:54.952420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.118 [2024-12-06 17:47:54.952434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.118 qpair failed and we were unable to recover it. 00:32:03.118 [2024-12-06 17:47:54.962359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.118 [2024-12-06 17:47:54.962410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.118 [2024-12-06 17:47:54.962436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.118 [2024-12-06 17:47:54.962445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.118 [2024-12-06 17:47:54.962452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.118 [2024-12-06 17:47:54.962471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.118 qpair failed and we were unable to recover it. 00:32:03.118 [2024-12-06 17:47:54.972429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.118 [2024-12-06 17:47:54.972493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.118 [2024-12-06 17:47:54.972518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.118 [2024-12-06 17:47:54.972526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.118 [2024-12-06 17:47:54.972533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.118 [2024-12-06 17:47:54.972552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.118 qpair failed and we were unable to recover it. 
00:32:03.118 [2024-12-06 17:47:54.982439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.118 [2024-12-06 17:47:54.982494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.118 [2024-12-06 17:47:54.982509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.118 [2024-12-06 17:47:54.982517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.118 [2024-12-06 17:47:54.982523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.118 [2024-12-06 17:47:54.982539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.118 qpair failed and we were unable to recover it. 00:32:03.118 [2024-12-06 17:47:54.992448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.118 [2024-12-06 17:47:54.992498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.118 [2024-12-06 17:47:54.992512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.118 [2024-12-06 17:47:54.992520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.118 [2024-12-06 17:47:54.992526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.118 [2024-12-06 17:47:54.992541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.118 qpair failed and we were unable to recover it. 00:32:03.118 [2024-12-06 17:47:55.002483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.118 [2024-12-06 17:47:55.002529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.118 [2024-12-06 17:47:55.002547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.118 [2024-12-06 17:47:55.002554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.118 [2024-12-06 17:47:55.002561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.118 [2024-12-06 17:47:55.002575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.118 qpair failed and we were unable to recover it. 
00:32:03.118 [2024-12-06 17:47:55.012542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.118 [2024-12-06 17:47:55.012602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.118 [2024-12-06 17:47:55.012616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.118 [2024-12-06 17:47:55.012623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.118 [2024-12-06 17:47:55.012630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.118 [2024-12-06 17:47:55.012647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.118 qpair failed and we were unable to recover it. 00:32:03.118 [2024-12-06 17:47:55.022568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.118 [2024-12-06 17:47:55.022619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.118 [2024-12-06 17:47:55.022632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.118 [2024-12-06 17:47:55.022643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.118 [2024-12-06 17:47:55.022650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.118 [2024-12-06 17:47:55.022663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.118 qpair failed and we were unable to recover it. 00:32:03.118 [2024-12-06 17:47:55.032552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.118 [2024-12-06 17:47:55.032599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.118 [2024-12-06 17:47:55.032613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.118 [2024-12-06 17:47:55.032620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.118 [2024-12-06 17:47:55.032626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.118 [2024-12-06 17:47:55.032643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.118 qpair failed and we were unable to recover it. 
00:32:03.118 [2024-12-06 17:47:55.042589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.042645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.042659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.042666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.119 [2024-12-06 17:47:55.042676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.119 [2024-12-06 17:47:55.042690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.119 qpair failed and we were unable to recover it. 00:32:03.119 [2024-12-06 17:47:55.052675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.052744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.052758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.052766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.119 [2024-12-06 17:47:55.052773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.119 [2024-12-06 17:47:55.052786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.119 qpair failed and we were unable to recover it. 00:32:03.119 [2024-12-06 17:47:55.062681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.062739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.062753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.062760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.119 [2024-12-06 17:47:55.062767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.119 [2024-12-06 17:47:55.062781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.119 qpair failed and we were unable to recover it. 
00:32:03.119 [2024-12-06 17:47:55.072664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.072716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.072730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.072737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.119 [2024-12-06 17:47:55.072744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.119 [2024-12-06 17:47:55.072757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.119 qpair failed and we were unable to recover it. 00:32:03.119 [2024-12-06 17:47:55.082682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.082733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.082747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.082754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.119 [2024-12-06 17:47:55.082760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.119 [2024-12-06 17:47:55.082775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.119 qpair failed and we were unable to recover it. 00:32:03.119 [2024-12-06 17:47:55.092766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.092856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.092869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.092877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.119 [2024-12-06 17:47:55.092883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.119 [2024-12-06 17:47:55.092897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.119 qpair failed and we were unable to recover it. 
00:32:03.119 [2024-12-06 17:47:55.102806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.102881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.102895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.102902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.119 [2024-12-06 17:47:55.102908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.119 [2024-12-06 17:47:55.102922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.119 qpair failed and we were unable to recover it. 00:32:03.119 [2024-12-06 17:47:55.112783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.112831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.112845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.112853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.119 [2024-12-06 17:47:55.112859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.119 [2024-12-06 17:47:55.112873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.119 qpair failed and we were unable to recover it. 00:32:03.119 [2024-12-06 17:47:55.122825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.122910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.122925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.122932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.119 [2024-12-06 17:47:55.122939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.119 [2024-12-06 17:47:55.122953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.119 qpair failed and we were unable to recover it. 
00:32:03.119 [2024-12-06 17:47:55.132894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.132945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.132963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.132970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.119 [2024-12-06 17:47:55.132976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.119 [2024-12-06 17:47:55.132991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.119 qpair failed and we were unable to recover it. 00:32:03.119 [2024-12-06 17:47:55.142880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.119 [2024-12-06 17:47:55.142939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.119 [2024-12-06 17:47:55.142952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.119 [2024-12-06 17:47:55.142960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.120 [2024-12-06 17:47:55.142966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.120 [2024-12-06 17:47:55.142980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.120 qpair failed and we were unable to recover it. 00:32:03.120 [2024-12-06 17:47:55.152881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.120 [2024-12-06 17:47:55.152930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.120 [2024-12-06 17:47:55.152945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.120 [2024-12-06 17:47:55.152952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.120 [2024-12-06 17:47:55.152959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.120 [2024-12-06 17:47:55.152974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.120 qpair failed and we were unable to recover it. 
00:32:03.120 [2024-12-06 17:47:55.162887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.120 [2024-12-06 17:47:55.162939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.120 [2024-12-06 17:47:55.162953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.120 [2024-12-06 17:47:55.162961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.120 [2024-12-06 17:47:55.162967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.120 [2024-12-06 17:47:55.162980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.120 qpair failed and we were unable to recover it. 00:32:03.120 [2024-12-06 17:47:55.172994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.120 [2024-12-06 17:47:55.173046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.120 [2024-12-06 17:47:55.173060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.120 [2024-12-06 17:47:55.173067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.120 [2024-12-06 17:47:55.173077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.120 [2024-12-06 17:47:55.173091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.120 qpair failed and we were unable to recover it. 00:32:03.381 [2024-12-06 17:47:55.183036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.381 [2024-12-06 17:47:55.183089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.381 [2024-12-06 17:47:55.183102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.381 [2024-12-06 17:47:55.183109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.381 [2024-12-06 17:47:55.183115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.381 [2024-12-06 17:47:55.183129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.381 qpair failed and we were unable to recover it. 
00:32:03.381 [2024-12-06 17:47:55.192985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.381 [2024-12-06 17:47:55.193031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.381 [2024-12-06 17:47:55.193044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.381 [2024-12-06 17:47:55.193051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.381 [2024-12-06 17:47:55.193058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.381 [2024-12-06 17:47:55.193071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.381 qpair failed and we were unable to recover it. 00:32:03.381 [2024-12-06 17:47:55.203011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.381 [2024-12-06 17:47:55.203060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.203073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.203080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.203087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.203100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 00:32:03.382 [2024-12-06 17:47:55.213097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.213149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.213162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.213169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.213175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.213189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 
00:32:03.382 [2024-12-06 17:47:55.223096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.223151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.223165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.223172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.223178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.223191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 00:32:03.382 [2024-12-06 17:47:55.233119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.233187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.233201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.233208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.233214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.233228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 00:32:03.382 [2024-12-06 17:47:55.243136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.243189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.243202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.243209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.243216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.243229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 
00:32:03.382 [2024-12-06 17:47:55.253115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.253168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.253181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.253188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.253195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.253208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 00:32:03.382 [2024-12-06 17:47:55.263229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.263279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.263296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.263304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.263310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.263324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 00:32:03.382 [2024-12-06 17:47:55.273197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.273247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.273260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.273268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.273274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.273288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 
00:32:03.382 [2024-12-06 17:47:55.283243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.283291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.283304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.283311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.283318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.283331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 00:32:03.382 [2024-12-06 17:47:55.293321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.293406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.293422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.293430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.293436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.293451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 00:32:03.382 [2024-12-06 17:47:55.303366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.303428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.303453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.303467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.303475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.303494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 
00:32:03.382 [2024-12-06 17:47:55.313292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.313339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.313355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.313362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.313369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.382 [2024-12-06 17:47:55.313384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.382 qpair failed and we were unable to recover it. 00:32:03.382 [2024-12-06 17:47:55.323298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.382 [2024-12-06 17:47:55.323343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.382 [2024-12-06 17:47:55.323358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.382 [2024-12-06 17:47:55.323365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.382 [2024-12-06 17:47:55.323372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.323386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 00:32:03.383 [2024-12-06 17:47:55.333425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.333512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.333526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.333535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.333541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.333555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 
00:32:03.383 [2024-12-06 17:47:55.343460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.343545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.343559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.343567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.343576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.343590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 00:32:03.383 [2024-12-06 17:47:55.353436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.353512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.353526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.353533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.353540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.353554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 00:32:03.383 [2024-12-06 17:47:55.363461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.363512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.363527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.363534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.363541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.363555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 
00:32:03.383 [2024-12-06 17:47:55.373551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.373604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.373618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.373626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.373632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.373650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 00:32:03.383 [2024-12-06 17:47:55.383574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.383628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.383647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.383654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.383661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.383674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 00:32:03.383 [2024-12-06 17:47:55.393545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.393606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.393623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.393631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.393641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.393655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 
00:32:03.383 [2024-12-06 17:47:55.403461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.403507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.403520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.403528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.403534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.403548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 00:32:03.383 [2024-12-06 17:47:55.413672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.413726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.413739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.413747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.413753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.413767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 00:32:03.383 [2024-12-06 17:47:55.423680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.423736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.423749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.423757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.423764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.423778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 
00:32:03.383 [2024-12-06 17:47:55.433656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.433711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.433725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.433736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.433742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.433756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 00:32:03.383 [2024-12-06 17:47:55.443677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.383 [2024-12-06 17:47:55.443725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.383 [2024-12-06 17:47:55.443739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.383 [2024-12-06 17:47:55.443746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.383 [2024-12-06 17:47:55.443753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.383 [2024-12-06 17:47:55.443767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.383 qpair failed and we were unable to recover it. 00:32:03.645 [2024-12-06 17:47:55.453737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.645 [2024-12-06 17:47:55.453792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.645 [2024-12-06 17:47:55.453805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.645 [2024-12-06 17:47:55.453812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.645 [2024-12-06 17:47:55.453819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.645 [2024-12-06 17:47:55.453833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.645 qpair failed and we were unable to recover it. 
00:32:03.645 [2024-12-06 17:47:55.463794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.645 [2024-12-06 17:47:55.463847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.645 [2024-12-06 17:47:55.463861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.645 [2024-12-06 17:47:55.463868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.645 [2024-12-06 17:47:55.463875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.645 [2024-12-06 17:47:55.463888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.645 qpair failed and we were unable to recover it. 00:32:03.645 [2024-12-06 17:47:55.473737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.645 [2024-12-06 17:47:55.473790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.645 [2024-12-06 17:47:55.473804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.645 [2024-12-06 17:47:55.473812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.645 [2024-12-06 17:47:55.473818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.645 [2024-12-06 17:47:55.473832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.645 qpair failed and we were unable to recover it. 00:32:03.645 [2024-12-06 17:47:55.483775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.645 [2024-12-06 17:47:55.483821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.645 [2024-12-06 17:47:55.483834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.645 [2024-12-06 17:47:55.483841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.645 [2024-12-06 17:47:55.483848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.483861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 
00:32:03.646 [2024-12-06 17:47:55.493876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.493965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.493978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.493987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.493993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.494007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 00:32:03.646 [2024-12-06 17:47:55.503893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.503952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.503965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.503972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.503979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.503993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 00:32:03.646 [2024-12-06 17:47:55.513864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.513910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.513923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.513930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.513937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.513950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 
00:32:03.646 [2024-12-06 17:47:55.523800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.523860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.523876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.523884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.523890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.523904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 00:32:03.646 [2024-12-06 17:47:55.533984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.534042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.534056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.534063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.534070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.534083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 00:32:03.646 [2024-12-06 17:47:55.543894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.543956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.543969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.543976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.543983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.543997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 
00:32:03.646 [2024-12-06 17:47:55.553967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.554017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.554031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.554039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.554045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.554059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 00:32:03.646 [2024-12-06 17:47:55.564012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.564104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.564118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.564130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.564136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.564150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 00:32:03.646 [2024-12-06 17:47:55.573957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.574010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.574023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.574030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.574036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.574049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 
00:32:03.646 [2024-12-06 17:47:55.584116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.584168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.584182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.584190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.584196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.584211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 00:32:03.646 [2024-12-06 17:47:55.594075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.594122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.594136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.594143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.594150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.594163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 00:32:03.646 [2024-12-06 17:47:55.604132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.604180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.604194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.604201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.604207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.646 [2024-12-06 17:47:55.604221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.646 qpair failed and we were unable to recover it. 
00:32:03.646 [2024-12-06 17:47:55.614197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.646 [2024-12-06 17:47:55.614254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.646 [2024-12-06 17:47:55.614268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.646 [2024-12-06 17:47:55.614275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.646 [2024-12-06 17:47:55.614282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.647 [2024-12-06 17:47:55.614295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.647 qpair failed and we were unable to recover it. 00:32:03.647 [2024-12-06 17:47:55.624218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.647 [2024-12-06 17:47:55.624277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.647 [2024-12-06 17:47:55.624290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.647 [2024-12-06 17:47:55.624298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.647 [2024-12-06 17:47:55.624305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.647 [2024-12-06 17:47:55.624318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.647 qpair failed and we were unable to recover it. 00:32:03.647 [2024-12-06 17:47:55.634208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.647 [2024-12-06 17:47:55.634253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.647 [2024-12-06 17:47:55.634266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.647 [2024-12-06 17:47:55.634274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.647 [2024-12-06 17:47:55.634280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.647 [2024-12-06 17:47:55.634294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.647 qpair failed and we were unable to recover it. 
00:32:03.647 [2024-12-06 17:47:55.644238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.647 [2024-12-06 17:47:55.644289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.647 [2024-12-06 17:47:55.644302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.647 [2024-12-06 17:47:55.644309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.647 [2024-12-06 17:47:55.644315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.647 [2024-12-06 17:47:55.644329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.647 qpair failed and we were unable to recover it. 00:32:03.647 [2024-12-06 17:47:55.654317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.647 [2024-12-06 17:47:55.654372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.647 [2024-12-06 17:47:55.654389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.647 [2024-12-06 17:47:55.654397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.647 [2024-12-06 17:47:55.654403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.647 [2024-12-06 17:47:55.654417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.647 qpair failed and we were unable to recover it. 00:32:03.647 [2024-12-06 17:47:55.664325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.647 [2024-12-06 17:47:55.664388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.647 [2024-12-06 17:47:55.664413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.647 [2024-12-06 17:47:55.664422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.647 [2024-12-06 17:47:55.664429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.647 [2024-12-06 17:47:55.664448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.647 qpair failed and we were unable to recover it. 
00:32:03.647 [2024-12-06 17:47:55.674314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.647 [2024-12-06 17:47:55.674371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.647 [2024-12-06 17:47:55.674396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.647 [2024-12-06 17:47:55.674405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.647 [2024-12-06 17:47:55.674412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.647 [2024-12-06 17:47:55.674432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.647 qpair failed and we were unable to recover it. 00:32:03.647 [2024-12-06 17:47:55.684331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.647 [2024-12-06 17:47:55.684402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.647 [2024-12-06 17:47:55.684418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.647 [2024-12-06 17:47:55.684425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.647 [2024-12-06 17:47:55.684432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.647 [2024-12-06 17:47:55.684448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.647 qpair failed and we were unable to recover it. 00:32:03.647 [2024-12-06 17:47:55.694512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.647 [2024-12-06 17:47:55.694583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.647 [2024-12-06 17:47:55.694607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.647 [2024-12-06 17:47:55.694621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.647 [2024-12-06 17:47:55.694629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.647 [2024-12-06 17:47:55.694654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.647 qpair failed and we were unable to recover it. 
00:32:03.647 [2024-12-06 17:47:55.704471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.647 [2024-12-06 17:47:55.704525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.647 [2024-12-06 17:47:55.704541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.647 [2024-12-06 17:47:55.704549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.647 [2024-12-06 17:47:55.704556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.647 [2024-12-06 17:47:55.704570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.647 qpair failed and we were unable to recover it. 00:32:03.908 [2024-12-06 17:47:55.714458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.908 [2024-12-06 17:47:55.714510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.908 [2024-12-06 17:47:55.714524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.908 [2024-12-06 17:47:55.714532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.908 [2024-12-06 17:47:55.714538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.908 [2024-12-06 17:47:55.714553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.908 qpair failed and we were unable to recover it. 00:32:03.908 [2024-12-06 17:47:55.724446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.908 [2024-12-06 17:47:55.724537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.908 [2024-12-06 17:47:55.724551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.908 [2024-12-06 17:47:55.724559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.908 [2024-12-06 17:47:55.724566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.908 [2024-12-06 17:47:55.724580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.908 qpair failed and we were unable to recover it. 
00:32:03.908 [2024-12-06 17:47:55.734522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.908 [2024-12-06 17:47:55.734574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.908 [2024-12-06 17:47:55.734588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.908 [2024-12-06 17:47:55.734595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.908 [2024-12-06 17:47:55.734602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.908 [2024-12-06 17:47:55.734616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.908 qpair failed and we were unable to recover it. 00:32:03.908 [2024-12-06 17:47:55.744550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.908 [2024-12-06 17:47:55.744603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.908 [2024-12-06 17:47:55.744616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.908 [2024-12-06 17:47:55.744623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.908 [2024-12-06 17:47:55.744630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.908 [2024-12-06 17:47:55.744647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.908 qpair failed and we were unable to recover it. 00:32:03.908 [2024-12-06 17:47:55.754522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.908 [2024-12-06 17:47:55.754572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.908 [2024-12-06 17:47:55.754585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.908 [2024-12-06 17:47:55.754592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.908 [2024-12-06 17:47:55.754599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.908 [2024-12-06 17:47:55.754612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 
00:32:03.909 [2024-12-06 17:47:55.764524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.764569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.764583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.764591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.764597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.764612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 00:32:03.909 [2024-12-06 17:47:55.774676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.774740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.774754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.774761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.774768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.774783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 00:32:03.909 [2024-12-06 17:47:55.784658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.784719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.784733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.784741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.784747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.784761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 
00:32:03.909 [2024-12-06 17:47:55.794630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.794681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.794704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.794711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.794718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.794733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 00:32:03.909 [2024-12-06 17:47:55.804673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.804724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.804738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.804745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.804752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.804765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 00:32:03.909 [2024-12-06 17:47:55.814753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.814834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.814847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.814854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.814861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.814876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 
00:32:03.909 [2024-12-06 17:47:55.824774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.824827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.824840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.824851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.824857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.824871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 00:32:03.909 [2024-12-06 17:47:55.834621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.834675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.834689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.834696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.834703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.834717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 00:32:03.909 [2024-12-06 17:47:55.844772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.844871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.844885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.844892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.844899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.844913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 
00:32:03.909 [2024-12-06 17:47:55.854853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.854910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.854924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.854931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.854938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.854952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 00:32:03.909 [2024-12-06 17:47:55.864872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.864935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.864949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.864956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.864963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.864976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 00:32:03.909 [2024-12-06 17:47:55.874815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.874863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.874877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.874884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.874890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.874904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 
00:32:03.909 [2024-12-06 17:47:55.884869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.909 [2024-12-06 17:47:55.884919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.909 [2024-12-06 17:47:55.884933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.909 [2024-12-06 17:47:55.884940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.909 [2024-12-06 17:47:55.884947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.909 [2024-12-06 17:47:55.884961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.909 qpair failed and we were unable to recover it. 00:32:03.910 [2024-12-06 17:47:55.894958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.910 [2024-12-06 17:47:55.895010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.910 [2024-12-06 17:47:55.895024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.910 [2024-12-06 17:47:55.895032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.910 [2024-12-06 17:47:55.895038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.910 [2024-12-06 17:47:55.895052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.910 qpair failed and we were unable to recover it. 00:32:03.910 [2024-12-06 17:47:55.904877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.910 [2024-12-06 17:47:55.904934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.910 [2024-12-06 17:47:55.904948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.910 [2024-12-06 17:47:55.904955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.910 [2024-12-06 17:47:55.904961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.910 [2024-12-06 17:47:55.904975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.910 qpair failed and we were unable to recover it. 
00:32:03.910 [2024-12-06 17:47:55.914999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.910 [2024-12-06 17:47:55.915052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.910 [2024-12-06 17:47:55.915065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.910 [2024-12-06 17:47:55.915072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.910 [2024-12-06 17:47:55.915079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.910 [2024-12-06 17:47:55.915092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.910 qpair failed and we were unable to recover it. 00:32:03.910 [2024-12-06 17:47:55.924990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.910 [2024-12-06 17:47:55.925050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.910 [2024-12-06 17:47:55.925064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.910 [2024-12-06 17:47:55.925071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.910 [2024-12-06 17:47:55.925077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.910 [2024-12-06 17:47:55.925091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.910 qpair failed and we were unable to recover it. 00:32:03.910 [2024-12-06 17:47:55.935075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.910 [2024-12-06 17:47:55.935148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.910 [2024-12-06 17:47:55.935161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.910 [2024-12-06 17:47:55.935168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.910 [2024-12-06 17:47:55.935174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.910 [2024-12-06 17:47:55.935188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.910 qpair failed and we were unable to recover it. 
00:32:03.910 [2024-12-06 17:47:55.945148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.910 [2024-12-06 17:47:55.945236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.910 [2024-12-06 17:47:55.945250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.910 [2024-12-06 17:47:55.945259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.910 [2024-12-06 17:47:55.945265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.910 [2024-12-06 17:47:55.945278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.910 qpair failed and we were unable to recover it. 00:32:03.910 [2024-12-06 17:47:55.955072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.910 [2024-12-06 17:47:55.955123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.910 [2024-12-06 17:47:55.955136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.910 [2024-12-06 17:47:55.955147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.910 [2024-12-06 17:47:55.955153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.910 [2024-12-06 17:47:55.955167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.910 qpair failed and we were unable to recover it. 00:32:03.910 [2024-12-06 17:47:55.965101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:03.910 [2024-12-06 17:47:55.965149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:03.910 [2024-12-06 17:47:55.965163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:03.910 [2024-12-06 17:47:55.965170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:03.910 [2024-12-06 17:47:55.965176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:03.910 [2024-12-06 17:47:55.965190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:03.910 qpair failed and we were unable to recover it. 
00:32:04.171 [2024-12-06 17:47:55.975159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.171 [2024-12-06 17:47:55.975214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.171 [2024-12-06 17:47:55.975227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.171 [2024-12-06 17:47:55.975235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.171 [2024-12-06 17:47:55.975241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.171 [2024-12-06 17:47:55.975254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.171 qpair failed and we were unable to recover it. 00:32:04.171 [2024-12-06 17:47:55.985179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.171 [2024-12-06 17:47:55.985259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.171 [2024-12-06 17:47:55.985273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.171 [2024-12-06 17:47:55.985280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.171 [2024-12-06 17:47:55.985286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.171 [2024-12-06 17:47:55.985300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.171 qpair failed and we were unable to recover it. 00:32:04.171 [2024-12-06 17:47:55.995167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.171 [2024-12-06 17:47:55.995221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.171 [2024-12-06 17:47:55.995234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.171 [2024-12-06 17:47:55.995241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.171 [2024-12-06 17:47:55.995248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.171 [2024-12-06 17:47:55.995266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.171 qpair failed and we were unable to recover it. 
00:32:04.171 [2024-12-06 17:47:56.005210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.171 [2024-12-06 17:47:56.005257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.171 [2024-12-06 17:47:56.005272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.171 [2024-12-06 17:47:56.005279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.171 [2024-12-06 17:47:56.005286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.171 [2024-12-06 17:47:56.005300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.171 qpair failed and we were unable to recover it. 00:32:04.171 [2024-12-06 17:47:56.015257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.171 [2024-12-06 17:47:56.015315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.171 [2024-12-06 17:47:56.015329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.171 [2024-12-06 17:47:56.015336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.171 [2024-12-06 17:47:56.015343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.171 [2024-12-06 17:47:56.015356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.171 qpair failed and we were unable to recover it. 00:32:04.172 [2024-12-06 17:47:56.025326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.172 [2024-12-06 17:47:56.025376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.172 [2024-12-06 17:47:56.025389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.172 [2024-12-06 17:47:56.025397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.172 [2024-12-06 17:47:56.025403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.172 [2024-12-06 17:47:56.025417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.172 qpair failed and we were unable to recover it. 
00:32:04.172 [2024-12-06 17:47:56.035287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.172 [2024-12-06 17:47:56.035355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.172 [2024-12-06 17:47:56.035381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.172 [2024-12-06 17:47:56.035390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.172 [2024-12-06 17:47:56.035399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.172 [2024-12-06 17:47:56.035419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.172 qpair failed and we were unable to recover it. 00:32:04.172 [2024-12-06 17:47:56.045303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.172 [2024-12-06 17:47:56.045355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.172 [2024-12-06 17:47:56.045380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.172 [2024-12-06 17:47:56.045389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.172 [2024-12-06 17:47:56.045396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.172 [2024-12-06 17:47:56.045415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.172 qpair failed and we were unable to recover it. 00:32:04.172 [2024-12-06 17:47:56.055390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.172 [2024-12-06 17:47:56.055453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.172 [2024-12-06 17:47:56.055478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.172 [2024-12-06 17:47:56.055487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.172 [2024-12-06 17:47:56.055494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.172 [2024-12-06 17:47:56.055513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.172 qpair failed and we were unable to recover it. 
00:32:04.699 [2024-12-06 17:47:56.697095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.699 [2024-12-06 17:47:56.697180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.699 [2024-12-06 17:47:56.697194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.699 [2024-12-06 17:47:56.697201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.699 [2024-12-06 17:47:56.697209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.699 [2024-12-06 17:47:56.697223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.699 qpair failed and we were unable to recover it. 00:32:04.699 [2024-12-06 17:47:56.707132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.699 [2024-12-06 17:47:56.707185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.699 [2024-12-06 17:47:56.707199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.699 [2024-12-06 17:47:56.707206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.699 [2024-12-06 17:47:56.707213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.699 [2024-12-06 17:47:56.707226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.699 qpair failed and we were unable to recover it. 00:32:04.699 [2024-12-06 17:47:56.717144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.699 [2024-12-06 17:47:56.717194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.699 [2024-12-06 17:47:56.717209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.699 [2024-12-06 17:47:56.717217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.699 [2024-12-06 17:47:56.717223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.699 [2024-12-06 17:47:56.717241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.699 qpair failed and we were unable to recover it. 
00:32:04.699 [2024-12-06 17:47:56.727159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.699 [2024-12-06 17:47:56.727222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.699 [2024-12-06 17:47:56.727237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.699 [2024-12-06 17:47:56.727244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.699 [2024-12-06 17:47:56.727250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.699 [2024-12-06 17:47:56.727264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.699 qpair failed and we were unable to recover it. 00:32:04.699 [2024-12-06 17:47:56.737247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.699 [2024-12-06 17:47:56.737346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.699 [2024-12-06 17:47:56.737360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.699 [2024-12-06 17:47:56.737372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.699 [2024-12-06 17:47:56.737378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.699 [2024-12-06 17:47:56.737392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.699 qpair failed and we were unable to recover it. 00:32:04.699 [2024-12-06 17:47:56.747264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.699 [2024-12-06 17:47:56.747317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.699 [2024-12-06 17:47:56.747331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.699 [2024-12-06 17:47:56.747338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.699 [2024-12-06 17:47:56.747345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.699 [2024-12-06 17:47:56.747358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.699 qpair failed and we were unable to recover it. 
00:32:04.699 [2024-12-06 17:47:56.757256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.699 [2024-12-06 17:47:56.757304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.699 [2024-12-06 17:47:56.757318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.699 [2024-12-06 17:47:56.757325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.699 [2024-12-06 17:47:56.757332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.699 [2024-12-06 17:47:56.757345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.699 qpair failed and we were unable to recover it. 00:32:04.962 [2024-12-06 17:47:56.767271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.962 [2024-12-06 17:47:56.767323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.962 [2024-12-06 17:47:56.767338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.962 [2024-12-06 17:47:56.767346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.962 [2024-12-06 17:47:56.767352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.962 [2024-12-06 17:47:56.767366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.962 qpair failed and we were unable to recover it. 00:32:04.962 [2024-12-06 17:47:56.777351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.962 [2024-12-06 17:47:56.777403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.962 [2024-12-06 17:47:56.777416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.962 [2024-12-06 17:47:56.777424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.962 [2024-12-06 17:47:56.777430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.962 [2024-12-06 17:47:56.777448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.962 qpair failed and we were unable to recover it. 
00:32:04.962 [2024-12-06 17:47:56.787380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.962 [2024-12-06 17:47:56.787433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.962 [2024-12-06 17:47:56.787447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.962 [2024-12-06 17:47:56.787454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.962 [2024-12-06 17:47:56.787461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.962 [2024-12-06 17:47:56.787474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.962 qpair failed and we were unable to recover it. 00:32:04.962 [2024-12-06 17:47:56.797347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.962 [2024-12-06 17:47:56.797397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.962 [2024-12-06 17:47:56.797411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.962 [2024-12-06 17:47:56.797418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.962 [2024-12-06 17:47:56.797425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.962 [2024-12-06 17:47:56.797438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.962 qpair failed and we were unable to recover it. 00:32:04.962 [2024-12-06 17:47:56.807266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.962 [2024-12-06 17:47:56.807318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.962 [2024-12-06 17:47:56.807331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.962 [2024-12-06 17:47:56.807338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.962 [2024-12-06 17:47:56.807345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.962 [2024-12-06 17:47:56.807358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.962 qpair failed and we were unable to recover it. 
00:32:04.962 [2024-12-06 17:47:56.817459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.962 [2024-12-06 17:47:56.817517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.962 [2024-12-06 17:47:56.817532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.962 [2024-12-06 17:47:56.817539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.962 [2024-12-06 17:47:56.817546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.962 [2024-12-06 17:47:56.817562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.963 qpair failed and we were unable to recover it. 00:32:04.963 [2024-12-06 17:47:56.827482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.963 [2024-12-06 17:47:56.827541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.963 [2024-12-06 17:47:56.827557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.963 [2024-12-06 17:47:56.827564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.963 [2024-12-06 17:47:56.827571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.963 [2024-12-06 17:47:56.827585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.963 qpair failed and we were unable to recover it. 00:32:04.963 [2024-12-06 17:47:56.837462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.963 [2024-12-06 17:47:56.837519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.963 [2024-12-06 17:47:56.837533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.963 [2024-12-06 17:47:56.837541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.963 [2024-12-06 17:47:56.837548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.963 [2024-12-06 17:47:56.837562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.963 qpair failed and we were unable to recover it. 
00:32:04.963 [2024-12-06 17:47:56.847474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.963 [2024-12-06 17:47:56.847523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.963 [2024-12-06 17:47:56.847537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.963 [2024-12-06 17:47:56.847544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.963 [2024-12-06 17:47:56.847551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.963 [2024-12-06 17:47:56.847565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.963 qpair failed and we were unable to recover it. 00:32:04.963 [2024-12-06 17:47:56.857584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.963 [2024-12-06 17:47:56.857649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.963 [2024-12-06 17:47:56.857664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.963 [2024-12-06 17:47:56.857671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.963 [2024-12-06 17:47:56.857678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.963 [2024-12-06 17:47:56.857692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.963 qpair failed and we were unable to recover it. 00:32:04.963 [2024-12-06 17:47:56.867611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.963 [2024-12-06 17:47:56.867668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.963 [2024-12-06 17:47:56.867682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.963 [2024-12-06 17:47:56.867694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.963 [2024-12-06 17:47:56.867701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.963 [2024-12-06 17:47:56.867715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.963 qpair failed and we were unable to recover it. 
00:32:04.963 [2024-12-06 17:47:56.877543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.963 [2024-12-06 17:47:56.877591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.963 [2024-12-06 17:47:56.877605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.963 [2024-12-06 17:47:56.877612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.963 [2024-12-06 17:47:56.877619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.963 [2024-12-06 17:47:56.877632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.963 qpair failed and we were unable to recover it. 00:32:04.963 [2024-12-06 17:47:56.887603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.963 [2024-12-06 17:47:56.887656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.963 [2024-12-06 17:47:56.887670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.963 [2024-12-06 17:47:56.887677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.963 [2024-12-06 17:47:56.887684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.963 [2024-12-06 17:47:56.887698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.963 qpair failed and we were unable to recover it. 00:32:04.963 [2024-12-06 17:47:56.897682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.963 [2024-12-06 17:47:56.897741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.963 [2024-12-06 17:47:56.897754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.963 [2024-12-06 17:47:56.897762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.963 [2024-12-06 17:47:56.897769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.963 [2024-12-06 17:47:56.897783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.963 qpair failed and we were unable to recover it. 
00:32:04.963 [2024-12-06 17:47:56.907723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.963 [2024-12-06 17:47:56.907782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.963 [2024-12-06 17:47:56.907796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.963 [2024-12-06 17:47:56.907803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.963 [2024-12-06 17:47:56.907809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.963 [2024-12-06 17:47:56.907827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.963 qpair failed and we were unable to recover it. 00:32:04.963 [2024-12-06 17:47:56.917705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.963 [2024-12-06 17:47:56.917754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.964 [2024-12-06 17:47:56.917768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.964 [2024-12-06 17:47:56.917775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.964 [2024-12-06 17:47:56.917782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.964 [2024-12-06 17:47:56.917795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.964 qpair failed and we were unable to recover it. 00:32:04.964 [2024-12-06 17:47:56.927724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.964 [2024-12-06 17:47:56.927773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.964 [2024-12-06 17:47:56.927786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.964 [2024-12-06 17:47:56.927793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.964 [2024-12-06 17:47:56.927801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.964 [2024-12-06 17:47:56.927815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.964 qpair failed and we were unable to recover it. 
00:32:04.964 [2024-12-06 17:47:56.937792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.964 [2024-12-06 17:47:56.937852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.964 [2024-12-06 17:47:56.937865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.964 [2024-12-06 17:47:56.937872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.964 [2024-12-06 17:47:56.937879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.964 [2024-12-06 17:47:56.937892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.964 qpair failed and we were unable to recover it. 00:32:04.964 [2024-12-06 17:47:56.947827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.964 [2024-12-06 17:47:56.947887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.964 [2024-12-06 17:47:56.947900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.964 [2024-12-06 17:47:56.947907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.964 [2024-12-06 17:47:56.947914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.964 [2024-12-06 17:47:56.947927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.964 qpair failed and we were unable to recover it. 00:32:04.964 [2024-12-06 17:47:56.957712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.964 [2024-12-06 17:47:56.957764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.964 [2024-12-06 17:47:56.957778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.964 [2024-12-06 17:47:56.957785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.964 [2024-12-06 17:47:56.957792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.964 [2024-12-06 17:47:56.957806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.964 qpair failed and we were unable to recover it. 
00:32:04.964 [2024-12-06 17:47:56.967816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.964 [2024-12-06 17:47:56.967873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.964 [2024-12-06 17:47:56.967888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.964 [2024-12-06 17:47:56.967895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.964 [2024-12-06 17:47:56.967902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.964 [2024-12-06 17:47:56.967916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.964 qpair failed and we were unable to recover it. 00:32:04.964 [2024-12-06 17:47:56.977889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.964 [2024-12-06 17:47:56.977942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.964 [2024-12-06 17:47:56.977955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.964 [2024-12-06 17:47:56.977963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.964 [2024-12-06 17:47:56.977969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.964 [2024-12-06 17:47:56.977983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.964 qpair failed and we were unable to recover it. 00:32:04.964 [2024-12-06 17:47:56.987921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.964 [2024-12-06 17:47:56.987972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.964 [2024-12-06 17:47:56.987986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.964 [2024-12-06 17:47:56.987993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.964 [2024-12-06 17:47:56.987999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.964 [2024-12-06 17:47:56.988013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.964 qpair failed and we were unable to recover it. 
00:32:04.964 [2024-12-06 17:47:56.997891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.964 [2024-12-06 17:47:56.997983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.964 [2024-12-06 17:47:56.997997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.964 [2024-12-06 17:47:56.998007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.964 [2024-12-06 17:47:56.998014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.964 [2024-12-06 17:47:56.998027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.964 qpair failed and we were unable to recover it. 00:32:04.964 [2024-12-06 17:47:57.007924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.964 [2024-12-06 17:47:57.007983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.964 [2024-12-06 17:47:57.007996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.964 [2024-12-06 17:47:57.008004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.964 [2024-12-06 17:47:57.008010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.964 [2024-12-06 17:47:57.008024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.965 qpair failed and we were unable to recover it. 00:32:04.965 [2024-12-06 17:47:57.018021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:04.965 [2024-12-06 17:47:57.018073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:04.965 [2024-12-06 17:47:57.018087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:04.965 [2024-12-06 17:47:57.018094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:04.965 [2024-12-06 17:47:57.018100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:04.965 [2024-12-06 17:47:57.018114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:04.965 qpair failed and we were unable to recover it. 
00:32:05.228 [2024-12-06 17:47:57.028082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.028173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.228 [2024-12-06 17:47:57.028187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.228 [2024-12-06 17:47:57.028194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.228 [2024-12-06 17:47:57.028200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.228 [2024-12-06 17:47:57.028214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.228 qpair failed and we were unable to recover it. 00:32:05.228 [2024-12-06 17:47:57.038012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.038058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.228 [2024-12-06 17:47:57.038073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.228 [2024-12-06 17:47:57.038080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.228 [2024-12-06 17:47:57.038087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.228 [2024-12-06 17:47:57.038104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.228 qpair failed and we were unable to recover it. 00:32:05.228 [2024-12-06 17:47:57.048035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.048094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.228 [2024-12-06 17:47:57.048109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.228 [2024-12-06 17:47:57.048116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.228 [2024-12-06 17:47:57.048127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.228 [2024-12-06 17:47:57.048141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.228 qpair failed and we were unable to recover it. 
00:32:05.228 [2024-12-06 17:47:57.058150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.058227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.228 [2024-12-06 17:47:57.058241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.228 [2024-12-06 17:47:57.058248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.228 [2024-12-06 17:47:57.058255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.228 [2024-12-06 17:47:57.058268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.228 qpair failed and we were unable to recover it. 00:32:05.228 [2024-12-06 17:47:57.068148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.068202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.228 [2024-12-06 17:47:57.068216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.228 [2024-12-06 17:47:57.068224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.228 [2024-12-06 17:47:57.068230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.228 [2024-12-06 17:47:57.068243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.228 qpair failed and we were unable to recover it. 00:32:05.228 [2024-12-06 17:47:57.078135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.078198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.228 [2024-12-06 17:47:57.078211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.228 [2024-12-06 17:47:57.078219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.228 [2024-12-06 17:47:57.078225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.228 [2024-12-06 17:47:57.078238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.228 qpair failed and we were unable to recover it. 
00:32:05.228 [2024-12-06 17:47:57.088115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.088162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.228 [2024-12-06 17:47:57.088176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.228 [2024-12-06 17:47:57.088183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.228 [2024-12-06 17:47:57.088189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.228 [2024-12-06 17:47:57.088203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.228 qpair failed and we were unable to recover it. 00:32:05.228 [2024-12-06 17:47:57.098198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.098263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.228 [2024-12-06 17:47:57.098278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.228 [2024-12-06 17:47:57.098287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.228 [2024-12-06 17:47:57.098294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.228 [2024-12-06 17:47:57.098311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.228 qpair failed and we were unable to recover it. 00:32:05.228 [2024-12-06 17:47:57.108242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.108296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.228 [2024-12-06 17:47:57.108311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.228 [2024-12-06 17:47:57.108319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.228 [2024-12-06 17:47:57.108326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.228 [2024-12-06 17:47:57.108343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.228 qpair failed and we were unable to recover it. 
00:32:05.228 [2024-12-06 17:47:57.118207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.118254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.228 [2024-12-06 17:47:57.118268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.228 [2024-12-06 17:47:57.118276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.228 [2024-12-06 17:47:57.118282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.228 [2024-12-06 17:47:57.118296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.228 qpair failed and we were unable to recover it. 00:32:05.228 [2024-12-06 17:47:57.128246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.228 [2024-12-06 17:47:57.128298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.128317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.128324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.128331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.128346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 00:32:05.229 [2024-12-06 17:47:57.138306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.138357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.138372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.138379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.138386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.138400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 
00:32:05.229 [2024-12-06 17:47:57.148355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.148415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.148428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.148436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.148442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.148456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 00:32:05.229 [2024-12-06 17:47:57.158351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.158401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.158417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.158424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.158430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.158445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 00:32:05.229 [2024-12-06 17:47:57.168369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.168439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.168453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.168460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.168467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.168485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 
00:32:05.229 [2024-12-06 17:47:57.178493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.178563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.178588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.178597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.178604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.178624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 00:32:05.229 [2024-12-06 17:47:57.188476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.188535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.188561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.188570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.188577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.188596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 00:32:05.229 [2024-12-06 17:47:57.198413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.198463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.198488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.198497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.198504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.198524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 
00:32:05.229 [2024-12-06 17:47:57.208467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.208519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.208535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.208542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.208549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.208564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 00:32:05.229 [2024-12-06 17:47:57.218527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.218585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.218599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.218607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.218613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.218627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 00:32:05.229 [2024-12-06 17:47:57.228586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.228643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.228657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.228664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.228671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.228685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 
00:32:05.229 [2024-12-06 17:47:57.238535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.238584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.238598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.238605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.238611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.238625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 00:32:05.229 [2024-12-06 17:47:57.248577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.229 [2024-12-06 17:47:57.248626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.229 [2024-12-06 17:47:57.248645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.229 [2024-12-06 17:47:57.248652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.229 [2024-12-06 17:47:57.248658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.229 [2024-12-06 17:47:57.248673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.229 qpair failed and we were unable to recover it. 00:32:05.229 [2024-12-06 17:47:57.258659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.230 [2024-12-06 17:47:57.258716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.230 [2024-12-06 17:47:57.258733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.230 [2024-12-06 17:47:57.258740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.230 [2024-12-06 17:47:57.258747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.230 [2024-12-06 17:47:57.258761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.230 qpair failed and we were unable to recover it. 
00:32:05.230 [2024-12-06 17:47:57.268685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.230 [2024-12-06 17:47:57.268740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.230 [2024-12-06 17:47:57.268754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.230 [2024-12-06 17:47:57.268761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.230 [2024-12-06 17:47:57.268767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.230 [2024-12-06 17:47:57.268781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.230 qpair failed and we were unable to recover it. 00:32:05.230 [2024-12-06 17:47:57.278671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.230 [2024-12-06 17:47:57.278716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.230 [2024-12-06 17:47:57.278729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.230 [2024-12-06 17:47:57.278736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.230 [2024-12-06 17:47:57.278743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.230 [2024-12-06 17:47:57.278756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.230 qpair failed and we were unable to recover it. 00:32:05.230 [2024-12-06 17:47:57.288696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.230 [2024-12-06 17:47:57.288744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.230 [2024-12-06 17:47:57.288757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.230 [2024-12-06 17:47:57.288765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.230 [2024-12-06 17:47:57.288771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.230 [2024-12-06 17:47:57.288785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.230 qpair failed and we were unable to recover it. 
00:32:05.493 [2024-12-06 17:47:57.298760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.493 [2024-12-06 17:47:57.298816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.493 [2024-12-06 17:47:57.298829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.493 [2024-12-06 17:47:57.298837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.493 [2024-12-06 17:47:57.298843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.493 [2024-12-06 17:47:57.298860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.493 qpair failed and we were unable to recover it. 00:32:05.493 [2024-12-06 17:47:57.308817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.493 [2024-12-06 17:47:57.308867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.493 [2024-12-06 17:47:57.308880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.493 [2024-12-06 17:47:57.308888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.493 [2024-12-06 17:47:57.308894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.493 [2024-12-06 17:47:57.308908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.493 qpair failed and we were unable to recover it. 00:32:05.493 [2024-12-06 17:47:57.318776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.493 [2024-12-06 17:47:57.318825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.493 [2024-12-06 17:47:57.318840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.493 [2024-12-06 17:47:57.318848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.493 [2024-12-06 17:47:57.318855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.493 [2024-12-06 17:47:57.318873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.493 qpair failed and we were unable to recover it. 
00:32:05.493 [2024-12-06 17:47:57.328693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.493 [2024-12-06 17:47:57.328760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.493 [2024-12-06 17:47:57.328774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.493 [2024-12-06 17:47:57.328782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.493 [2024-12-06 17:47:57.328788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.493 [2024-12-06 17:47:57.328802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.493 qpair failed and we were unable to recover it. 00:32:05.493 [2024-12-06 17:47:57.338890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.493 [2024-12-06 17:47:57.338943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.493 [2024-12-06 17:47:57.338957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.493 [2024-12-06 17:47:57.338965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.493 [2024-12-06 17:47:57.338971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.493 [2024-12-06 17:47:57.338985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.493 qpair failed and we were unable to recover it. 00:32:05.493 [2024-12-06 17:47:57.348943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.348997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.349011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.349018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.349024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.349038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 
00:32:05.494 [2024-12-06 17:47:57.358762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.358808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.358821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.358828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.358835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.358848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 00:32:05.494 [2024-12-06 17:47:57.368900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.368950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.368963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.368971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.368977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.368991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 00:32:05.494 [2024-12-06 17:47:57.378985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.379046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.379061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.379069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.379078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.379093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 
00:32:05.494 [2024-12-06 17:47:57.388909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.388969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.388986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.388994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.389000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.389014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 00:32:05.494 [2024-12-06 17:47:57.398993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.399047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.399061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.399069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.399075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.399089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 00:32:05.494 [2024-12-06 17:47:57.409046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.409094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.409109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.409116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.409122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.409136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 
00:32:05.494 [2024-12-06 17:47:57.419087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.419164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.419177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.419185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.419191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.419204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 00:32:05.494 [2024-12-06 17:47:57.429127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.429181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.429194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.429202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.429208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.429225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 00:32:05.494 [2024-12-06 17:47:57.439134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.439216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.439230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.439237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.439243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.439257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 
00:32:05.494 [2024-12-06 17:47:57.449161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.449207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.449221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.449228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.449234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.449248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 00:32:05.494 [2024-12-06 17:47:57.459236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.459293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.459307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.459315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.459321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.494 [2024-12-06 17:47:57.459335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.494 qpair failed and we were unable to recover it. 00:32:05.494 [2024-12-06 17:47:57.469254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.494 [2024-12-06 17:47:57.469307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.494 [2024-12-06 17:47:57.469321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.494 [2024-12-06 17:47:57.469329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.494 [2024-12-06 17:47:57.469335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.495 [2024-12-06 17:47:57.469349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.495 qpair failed and we were unable to recover it. 
00:32:05.495 [2024-12-06 17:47:57.479190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.495 [2024-12-06 17:47:57.479239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.495 [2024-12-06 17:47:57.479252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.495 [2024-12-06 17:47:57.479259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.495 [2024-12-06 17:47:57.479265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.495 [2024-12-06 17:47:57.479279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.495 qpair failed and we were unable to recover it. 00:32:05.495 [2024-12-06 17:47:57.489259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.495 [2024-12-06 17:47:57.489306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.495 [2024-12-06 17:47:57.489319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.495 [2024-12-06 17:47:57.489327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.495 [2024-12-06 17:47:57.489333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.495 [2024-12-06 17:47:57.489346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.495 qpair failed and we were unable to recover it. 00:32:05.495 [2024-12-06 17:47:57.499304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.495 [2024-12-06 17:47:57.499362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.495 [2024-12-06 17:47:57.499379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.495 [2024-12-06 17:47:57.499387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.495 [2024-12-06 17:47:57.499393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.495 [2024-12-06 17:47:57.499408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.495 qpair failed and we were unable to recover it. 
00:32:05.495 [2024-12-06 17:47:57.509328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.495 [2024-12-06 17:47:57.509385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.495 [2024-12-06 17:47:57.509399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.495 [2024-12-06 17:47:57.509406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.495 [2024-12-06 17:47:57.509412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.495 [2024-12-06 17:47:57.509426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.495 qpair failed and we were unable to recover it. 00:32:05.495 [2024-12-06 17:47:57.519337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.495 [2024-12-06 17:47:57.519398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.495 [2024-12-06 17:47:57.519414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.495 [2024-12-06 17:47:57.519422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.495 [2024-12-06 17:47:57.519428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.495 [2024-12-06 17:47:57.519442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.495 qpair failed and we were unable to recover it. 00:32:05.495 [2024-12-06 17:47:57.529363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.495 [2024-12-06 17:47:57.529435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.495 [2024-12-06 17:47:57.529449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.495 [2024-12-06 17:47:57.529456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.495 [2024-12-06 17:47:57.529462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.495 [2024-12-06 17:47:57.529476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.495 qpair failed and we were unable to recover it. 
00:32:05.495 [2024-12-06 17:47:57.539449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.495 [2024-12-06 17:47:57.539510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.495 [2024-12-06 17:47:57.539535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.495 [2024-12-06 17:47:57.539543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.495 [2024-12-06 17:47:57.539550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.495 [2024-12-06 17:47:57.539569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.495 qpair failed and we were unable to recover it. 00:32:05.495 [2024-12-06 17:47:57.549472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.495 [2024-12-06 17:47:57.549533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.495 [2024-12-06 17:47:57.549550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.495 [2024-12-06 17:47:57.549558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.495 [2024-12-06 17:47:57.549569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.495 [2024-12-06 17:47:57.549584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.495 qpair failed and we were unable to recover it. 00:32:05.757 [2024-12-06 17:47:57.559426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.757 [2024-12-06 17:47:57.559475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.757 [2024-12-06 17:47:57.559490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.757 [2024-12-06 17:47:57.559497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.757 [2024-12-06 17:47:57.559504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.757 [2024-12-06 17:47:57.559529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.757 qpair failed and we were unable to recover it. 
00:32:05.757 [2024-12-06 17:47:57.569484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.757 [2024-12-06 17:47:57.569537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.757 [2024-12-06 17:47:57.569552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.757 [2024-12-06 17:47:57.569559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.757 [2024-12-06 17:47:57.569565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.757 [2024-12-06 17:47:57.569579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.757 qpair failed and we were unable to recover it. 00:32:05.757 [2024-12-06 17:47:57.579551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.757 [2024-12-06 17:47:57.579611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.757 [2024-12-06 17:47:57.579627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.757 [2024-12-06 17:47:57.579634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.757 [2024-12-06 17:47:57.579646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.757 [2024-12-06 17:47:57.579663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 00:32:05.758 [2024-12-06 17:47:57.589579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.589631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.589647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.589655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.589662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.589675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 
00:32:05.758 [2024-12-06 17:47:57.599556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.599603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.599617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.599624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.599631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.599648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 00:32:05.758 [2024-12-06 17:47:57.609556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.609605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.609620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.609627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.609633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.609651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 00:32:05.758 [2024-12-06 17:47:57.619669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.619726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.619739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.619746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.619752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.619766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 
00:32:05.758 [2024-12-06 17:47:57.629684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.629743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.629757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.629765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.629772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.629790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 00:32:05.758 [2024-12-06 17:47:57.639646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.639694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.639708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.639716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.639722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.639736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 00:32:05.758 [2024-12-06 17:47:57.649684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.649733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.649750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.649758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.649764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.649778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 
00:32:05.758 [2024-12-06 17:47:57.659645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.659702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.659717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.659725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.659731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.659750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 00:32:05.758 [2024-12-06 17:47:57.669805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.669858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.669873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.669880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.669887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.669901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 00:32:05.758 [2024-12-06 17:47:57.679775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.679823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.679837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.679844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.679850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.679864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 
00:32:05.758 [2024-12-06 17:47:57.689805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.689854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.689867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.689875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.689884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.758 [2024-12-06 17:47:57.689898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.758 qpair failed and we were unable to recover it. 00:32:05.758 [2024-12-06 17:47:57.699883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.758 [2024-12-06 17:47:57.699939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.758 [2024-12-06 17:47:57.699952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.758 [2024-12-06 17:47:57.699960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.758 [2024-12-06 17:47:57.699966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.699980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 00:32:05.759 [2024-12-06 17:47:57.709922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.759 [2024-12-06 17:47:57.709993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.759 [2024-12-06 17:47:57.710006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.759 [2024-12-06 17:47:57.710014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.759 [2024-12-06 17:47:57.710020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.710034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 
00:32:05.759 [2024-12-06 17:47:57.719887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.759 [2024-12-06 17:47:57.719935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.759 [2024-12-06 17:47:57.719949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.759 [2024-12-06 17:47:57.719956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.759 [2024-12-06 17:47:57.719962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.719975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 00:32:05.759 [2024-12-06 17:47:57.729907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.759 [2024-12-06 17:47:57.729955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.759 [2024-12-06 17:47:57.729969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.759 [2024-12-06 17:47:57.729976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.759 [2024-12-06 17:47:57.729983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.729996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 00:32:05.759 [2024-12-06 17:47:57.739989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.759 [2024-12-06 17:47:57.740045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.759 [2024-12-06 17:47:57.740060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.759 [2024-12-06 17:47:57.740067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.759 [2024-12-06 17:47:57.740074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.740092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 
00:32:05.759 [2024-12-06 17:47:57.750030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.759 [2024-12-06 17:47:57.750079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.759 [2024-12-06 17:47:57.750093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.759 [2024-12-06 17:47:57.750100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.759 [2024-12-06 17:47:57.750107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.750121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 00:32:05.759 [2024-12-06 17:47:57.759879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.759 [2024-12-06 17:47:57.759940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.759 [2024-12-06 17:47:57.759954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.759 [2024-12-06 17:47:57.759961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.759 [2024-12-06 17:47:57.759968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.759982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 00:32:05.759 [2024-12-06 17:47:57.770011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.759 [2024-12-06 17:47:57.770061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.759 [2024-12-06 17:47:57.770075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.759 [2024-12-06 17:47:57.770082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.759 [2024-12-06 17:47:57.770088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.770102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 
00:32:05.759 [2024-12-06 17:47:57.780078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.759 [2024-12-06 17:47:57.780133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.759 [2024-12-06 17:47:57.780150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.759 [2024-12-06 17:47:57.780157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.759 [2024-12-06 17:47:57.780164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.780177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 00:32:05.759 [2024-12-06 17:47:57.790095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.759 [2024-12-06 17:47:57.790150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.759 [2024-12-06 17:47:57.790164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.759 [2024-12-06 17:47:57.790171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.759 [2024-12-06 17:47:57.790178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.790191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 00:32:05.759 [2024-12-06 17:47:57.800103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:05.759 [2024-12-06 17:47:57.800153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:05.759 [2024-12-06 17:47:57.800166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:05.759 [2024-12-06 17:47:57.800173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.759 [2024-12-06 17:47:57.800180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:05.759 [2024-12-06 17:47:57.800193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:05.759 qpair failed and we were unable to recover it. 
00:32:05.759 [2024-12-06 17:47:57.810135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:05.759 [2024-12-06 17:47:57.810183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:05.759 [2024-12-06 17:47:57.810196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:05.759 [2024-12-06 17:47:57.810204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:05.759 [2024-12-06 17:47:57.810210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:05.759 [2024-12-06 17:47:57.810224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:05.759 qpair failed and we were unable to recover it.
00:32:05.759 [2024-12-06 17:47:57.820185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:05.759 [2024-12-06 17:47:57.820241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:05.759 [2024-12-06 17:47:57.820254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:05.760 [2024-12-06 17:47:57.820261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:05.760 [2024-12-06 17:47:57.820271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:05.760 [2024-12-06 17:47:57.820285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:05.760 qpair failed and we were unable to recover it.
00:32:06.021 [2024-12-06 17:47:57.830230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.021 [2024-12-06 17:47:57.830282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.021 [2024-12-06 17:47:57.830296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.021 [2024-12-06 17:47:57.830303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.021 [2024-12-06 17:47:57.830310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.021 [2024-12-06 17:47:57.830323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.021 qpair failed and we were unable to recover it.
00:32:06.021 [2024-12-06 17:47:57.840209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.021 [2024-12-06 17:47:57.840260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.021 [2024-12-06 17:47:57.840273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.021 [2024-12-06 17:47:57.840280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.021 [2024-12-06 17:47:57.840287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.021 [2024-12-06 17:47:57.840300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.021 qpair failed and we were unable to recover it.
00:32:06.021 [2024-12-06 17:47:57.850229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.021 [2024-12-06 17:47:57.850276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.021 [2024-12-06 17:47:57.850290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.021 [2024-12-06 17:47:57.850297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.021 [2024-12-06 17:47:57.850303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.021 [2024-12-06 17:47:57.850317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.021 qpair failed and we were unable to recover it.
00:32:06.021 [2024-12-06 17:47:57.860293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.021 [2024-12-06 17:47:57.860350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.021 [2024-12-06 17:47:57.860363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.021 [2024-12-06 17:47:57.860370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.021 [2024-12-06 17:47:57.860376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.021 [2024-12-06 17:47:57.860390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.021 qpair failed and we were unable to recover it.
00:32:06.021 [2024-12-06 17:47:57.870311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.021 [2024-12-06 17:47:57.870364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.021 [2024-12-06 17:47:57.870379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.021 [2024-12-06 17:47:57.870386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.021 [2024-12-06 17:47:57.870393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.021 [2024-12-06 17:47:57.870407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.021 qpair failed and we were unable to recover it.
00:32:06.021 [2024-12-06 17:47:57.880288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.021 [2024-12-06 17:47:57.880336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.021 [2024-12-06 17:47:57.880350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.021 [2024-12-06 17:47:57.880357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.021 [2024-12-06 17:47:57.880363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.021 [2024-12-06 17:47:57.880377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.021 qpair failed and we were unable to recover it.
00:32:06.021 [2024-12-06 17:47:57.890361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.021 [2024-12-06 17:47:57.890416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.021 [2024-12-06 17:47:57.890441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.021 [2024-12-06 17:47:57.890450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.021 [2024-12-06 17:47:57.890457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.021 [2024-12-06 17:47:57.890476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.021 qpair failed and we were unable to recover it.
00:32:06.021 [2024-12-06 17:47:57.900415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.021 [2024-12-06 17:47:57.900475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.021 [2024-12-06 17:47:57.900500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.021 [2024-12-06 17:47:57.900508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.021 [2024-12-06 17:47:57.900516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.021 [2024-12-06 17:47:57.900535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.021 qpair failed and we were unable to recover it.
00:32:06.021 [2024-12-06 17:47:57.910454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.022 [2024-12-06 17:47:57.910505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.022 [2024-12-06 17:47:57.910525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.022 [2024-12-06 17:47:57.910532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.022 [2024-12-06 17:47:57.910539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.022 [2024-12-06 17:47:57.910554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.022 qpair failed and we were unable to recover it.
00:32:06.022 [2024-12-06 17:47:57.920433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.022 [2024-12-06 17:47:57.920492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.022 [2024-12-06 17:47:57.920507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.022 [2024-12-06 17:47:57.920514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.022 [2024-12-06 17:47:57.920523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.022 [2024-12-06 17:47:57.920538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.022 qpair failed and we were unable to recover it.
00:32:06.022 [2024-12-06 17:47:57.930456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.022 [2024-12-06 17:47:57.930514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.022 [2024-12-06 17:47:57.930528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.022 [2024-12-06 17:47:57.930535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.022 [2024-12-06 17:47:57.930542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.022 [2024-12-06 17:47:57.930556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.022 qpair failed and we were unable to recover it.
00:32:06.022 [2024-12-06 17:47:57.940546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.022 [2024-12-06 17:47:57.940603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.022 [2024-12-06 17:47:57.940617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.022 [2024-12-06 17:47:57.940624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.022 [2024-12-06 17:47:57.940632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.022 [2024-12-06 17:47:57.940651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.022 qpair failed and we were unable to recover it.
00:32:06.022 [2024-12-06 17:47:57.950577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.022 [2024-12-06 17:47:57.950673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.022 [2024-12-06 17:47:57.950688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.022 [2024-12-06 17:47:57.950696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.022 [2024-12-06 17:47:57.950707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.022 [2024-12-06 17:47:57.950721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.022 qpair failed and we were unable to recover it.
00:32:06.022 [2024-12-06 17:47:57.960601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.022 [2024-12-06 17:47:57.960662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.022 [2024-12-06 17:47:57.960676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.022 [2024-12-06 17:47:57.960684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.022 [2024-12-06 17:47:57.960690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.022 [2024-12-06 17:47:57.960704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.022 qpair failed and we were unable to recover it.
00:32:06.022 [2024-12-06 17:47:57.970583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.022 [2024-12-06 17:47:57.970646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.022 [2024-12-06 17:47:57.970660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.022 [2024-12-06 17:47:57.970668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.022 [2024-12-06 17:47:57.970675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.022 [2024-12-06 17:47:57.970688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.022 qpair failed and we were unable to recover it.
00:32:06.022 [2024-12-06 17:47:57.980665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.022 [2024-12-06 17:47:57.980768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.022 [2024-12-06 17:47:57.980782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.022 [2024-12-06 17:47:57.980789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.022 [2024-12-06 17:47:57.980796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.022 [2024-12-06 17:47:57.980810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.022 qpair failed and we were unable to recover it.
00:32:06.022 [2024-12-06 17:47:57.990683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.022 [2024-12-06 17:47:57.990740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.022 [2024-12-06 17:47:57.990754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.022 [2024-12-06 17:47:57.990761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.022 [2024-12-06 17:47:57.990768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.022 [2024-12-06 17:47:57.990782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.022 qpair failed and we were unable to recover it.
00:32:06.022 [2024-12-06 17:47:58.000629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.022 [2024-12-06 17:47:58.000684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.022 [2024-12-06 17:47:58.000700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.022 [2024-12-06 17:47:58.000707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.022 [2024-12-06 17:47:58.000714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.022 [2024-12-06 17:47:58.000728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.022 qpair failed and we were unable to recover it.
00:32:06.022 [2024-12-06 17:47:58.010683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.023 [2024-12-06 17:47:58.010729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.023 [2024-12-06 17:47:58.010743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.023 [2024-12-06 17:47:58.010750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.023 [2024-12-06 17:47:58.010757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.023 [2024-12-06 17:47:58.010771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.023 qpair failed and we were unable to recover it.
00:32:06.023 [2024-12-06 17:47:58.020735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.023 [2024-12-06 17:47:58.020797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.023 [2024-12-06 17:47:58.020811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.023 [2024-12-06 17:47:58.020818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.023 [2024-12-06 17:47:58.020824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.023 [2024-12-06 17:47:58.020838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.023 qpair failed and we were unable to recover it.
00:32:06.023 [2024-12-06 17:47:58.030701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.023 [2024-12-06 17:47:58.030759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.023 [2024-12-06 17:47:58.030772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.023 [2024-12-06 17:47:58.030779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.023 [2024-12-06 17:47:58.030786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.023 [2024-12-06 17:47:58.030799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.023 qpair failed and we were unable to recover it.
00:32:06.023 [2024-12-06 17:47:58.040771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.023 [2024-12-06 17:47:58.040818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.023 [2024-12-06 17:47:58.040835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.023 [2024-12-06 17:47:58.040842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.023 [2024-12-06 17:47:58.040849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.023 [2024-12-06 17:47:58.040863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.023 qpair failed and we were unable to recover it.
00:32:06.023 [2024-12-06 17:47:58.050791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.023 [2024-12-06 17:47:58.050839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.023 [2024-12-06 17:47:58.050853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.023 [2024-12-06 17:47:58.050860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.023 [2024-12-06 17:47:58.050866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.023 [2024-12-06 17:47:58.050880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.023 qpair failed and we were unable to recover it.
00:32:06.023 [2024-12-06 17:47:58.060840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.023 [2024-12-06 17:47:58.060902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.023 [2024-12-06 17:47:58.060916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.023 [2024-12-06 17:47:58.060923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.023 [2024-12-06 17:47:58.060930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.023 [2024-12-06 17:47:58.060943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.023 qpair failed and we were unable to recover it.
00:32:06.023 [2024-12-06 17:47:58.070879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.023 [2024-12-06 17:47:58.070931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.023 [2024-12-06 17:47:58.070945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.023 [2024-12-06 17:47:58.070953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.023 [2024-12-06 17:47:58.070959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.023 [2024-12-06 17:47:58.070974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.023 qpair failed and we were unable to recover it.
00:32:06.023 [2024-12-06 17:47:58.080879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.023 [2024-12-06 17:47:58.080932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.023 [2024-12-06 17:47:58.080946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.023 [2024-12-06 17:47:58.080953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.023 [2024-12-06 17:47:58.080963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.023 [2024-12-06 17:47:58.080977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.023 qpair failed and we were unable to recover it.
00:32:06.286 [2024-12-06 17:47:58.090874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.286 [2024-12-06 17:47:58.090925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.286 [2024-12-06 17:47:58.090939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.286 [2024-12-06 17:47:58.090947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.286 [2024-12-06 17:47:58.090953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.286 [2024-12-06 17:47:58.090967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.286 qpair failed and we were unable to recover it.
00:32:06.286 [2024-12-06 17:47:58.100972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.286 [2024-12-06 17:47:58.101026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.286 [2024-12-06 17:47:58.101039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.286 [2024-12-06 17:47:58.101047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.286 [2024-12-06 17:47:58.101054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.286 [2024-12-06 17:47:58.101067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.286 qpair failed and we were unable to recover it.
00:32:06.286 [2024-12-06 17:47:58.111023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.286 [2024-12-06 17:47:58.111081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.286 [2024-12-06 17:47:58.111095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.286 [2024-12-06 17:47:58.111102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.286 [2024-12-06 17:47:58.111109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.286 [2024-12-06 17:47:58.111122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.286 qpair failed and we were unable to recover it.
00:32:06.286 [2024-12-06 17:47:58.120966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.286 [2024-12-06 17:47:58.121053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.286 [2024-12-06 17:47:58.121068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.286 [2024-12-06 17:47:58.121077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.286 [2024-12-06 17:47:58.121083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.286 [2024-12-06 17:47:58.121098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.286 qpair failed and we were unable to recover it.
00:32:06.286 [2024-12-06 17:47:58.131014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.286 [2024-12-06 17:47:58.131062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.286 [2024-12-06 17:47:58.131076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.286 [2024-12-06 17:47:58.131083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.286 [2024-12-06 17:47:58.131090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.286 [2024-12-06 17:47:58.131103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.286 qpair failed and we were unable to recover it.
00:32:06.286 [2024-12-06 17:47:58.141075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.286 [2024-12-06 17:47:58.141178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.286 [2024-12-06 17:47:58.141193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.286 [2024-12-06 17:47:58.141200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.286 [2024-12-06 17:47:58.141207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.286 [2024-12-06 17:47:58.141221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.286 qpair failed and we were unable to recover it.
00:32:06.286 [2024-12-06 17:47:58.151112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.286 [2024-12-06 17:47:58.151166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.151181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.287 [2024-12-06 17:47:58.151189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.287 [2024-12-06 17:47:58.151195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.287 [2024-12-06 17:47:58.151210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.287 qpair failed and we were unable to recover it.
00:32:06.287 [2024-12-06 17:47:58.161105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.287 [2024-12-06 17:47:58.161154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.161168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.287 [2024-12-06 17:47:58.161175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.287 [2024-12-06 17:47:58.161181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.287 [2024-12-06 17:47:58.161195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.287 qpair failed and we were unable to recover it.
00:32:06.287 [2024-12-06 17:47:58.171151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.287 [2024-12-06 17:47:58.171198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.171215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.287 [2024-12-06 17:47:58.171222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.287 [2024-12-06 17:47:58.171229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.287 [2024-12-06 17:47:58.171243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.287 qpair failed and we were unable to recover it.
00:32:06.287 [2024-12-06 17:47:58.181194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.287 [2024-12-06 17:47:58.181250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.181263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.287 [2024-12-06 17:47:58.181271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.287 [2024-12-06 17:47:58.181277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.287 [2024-12-06 17:47:58.181291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.287 qpair failed and we were unable to recover it.
00:32:06.287 [2024-12-06 17:47:58.191243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.287 [2024-12-06 17:47:58.191297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.191311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.287 [2024-12-06 17:47:58.191318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.287 [2024-12-06 17:47:58.191325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.287 [2024-12-06 17:47:58.191338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.287 qpair failed and we were unable to recover it.
00:32:06.287 [2024-12-06 17:47:58.201196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.287 [2024-12-06 17:47:58.201265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.201279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.287 [2024-12-06 17:47:58.201286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.287 [2024-12-06 17:47:58.201292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.287 [2024-12-06 17:47:58.201307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.287 qpair failed and we were unable to recover it.
00:32:06.287 [2024-12-06 17:47:58.211223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.287 [2024-12-06 17:47:58.211269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.211282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.287 [2024-12-06 17:47:58.211289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.287 [2024-12-06 17:47:58.211299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.287 [2024-12-06 17:47:58.211313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.287 qpair failed and we were unable to recover it.
00:32:06.287 [2024-12-06 17:47:58.221245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.287 [2024-12-06 17:47:58.221296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.221310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.287 [2024-12-06 17:47:58.221317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.287 [2024-12-06 17:47:58.221324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.287 [2024-12-06 17:47:58.221337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.287 qpair failed and we were unable to recover it.
00:32:06.287 [2024-12-06 17:47:58.231319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.287 [2024-12-06 17:47:58.231373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.231386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.287 [2024-12-06 17:47:58.231393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.287 [2024-12-06 17:47:58.231400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.287 [2024-12-06 17:47:58.231414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.287 qpair failed and we were unable to recover it.
00:32:06.287 [2024-12-06 17:47:58.241290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.287 [2024-12-06 17:47:58.241337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.241351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.287 [2024-12-06 17:47:58.241358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.287 [2024-12-06 17:47:58.241364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.287 [2024-12-06 17:47:58.241378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.287 qpair failed and we were unable to recover it.
00:32:06.287 [2024-12-06 17:47:58.251199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.287 [2024-12-06 17:47:58.251265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.287 [2024-12-06 17:47:58.251279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.288 [2024-12-06 17:47:58.251286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.288 [2024-12-06 17:47:58.251292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.288 [2024-12-06 17:47:58.251306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.288 qpair failed and we were unable to recover it.
00:32:06.288 [2024-12-06 17:47:58.261345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.288 [2024-12-06 17:47:58.261406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.288 [2024-12-06 17:47:58.261420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.288 [2024-12-06 17:47:58.261427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.288 [2024-12-06 17:47:58.261433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.288 [2024-12-06 17:47:58.261447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.288 qpair failed and we were unable to recover it.
00:32:06.288 [2024-12-06 17:47:58.271421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.288 [2024-12-06 17:47:58.271473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.288 [2024-12-06 17:47:58.271498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.288 [2024-12-06 17:47:58.271507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.288 [2024-12-06 17:47:58.271514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.288 [2024-12-06 17:47:58.271533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.288 qpair failed and we were unable to recover it.
00:32:06.288 [2024-12-06 17:47:58.281398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.288 [2024-12-06 17:47:58.281459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.288 [2024-12-06 17:47:58.281484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.288 [2024-12-06 17:47:58.281493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.288 [2024-12-06 17:47:58.281500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.288 [2024-12-06 17:47:58.281520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.288 qpair failed and we were unable to recover it.
00:32:06.288 [2024-12-06 17:47:58.291434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.288 [2024-12-06 17:47:58.291484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.288 [2024-12-06 17:47:58.291509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.288 [2024-12-06 17:47:58.291518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.288 [2024-12-06 17:47:58.291525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.288 [2024-12-06 17:47:58.291544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.288 qpair failed and we were unable to recover it.
00:32:06.288 [2024-12-06 17:47:58.301516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.288 [2024-12-06 17:47:58.301568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.288 [2024-12-06 17:47:58.301589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.288 [2024-12-06 17:47:58.301597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.288 [2024-12-06 17:47:58.301603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.288 [2024-12-06 17:47:58.301619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.288 qpair failed and we were unable to recover it.
00:32:06.288 [2024-12-06 17:47:58.311551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.288 [2024-12-06 17:47:58.311628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.288 [2024-12-06 17:47:58.311646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.288 [2024-12-06 17:47:58.311653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.288 [2024-12-06 17:47:58.311660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.288 [2024-12-06 17:47:58.311675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.288 qpair failed and we were unable to recover it.
00:32:06.288 [2024-12-06 17:47:58.321535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.288 [2024-12-06 17:47:58.321580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.288 [2024-12-06 17:47:58.321594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.288 [2024-12-06 17:47:58.321601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.288 [2024-12-06 17:47:58.321608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.288 [2024-12-06 17:47:58.321622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.288 qpair failed and we were unable to recover it.
00:32:06.288 [2024-12-06 17:47:58.331550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.288 [2024-12-06 17:47:58.331595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.288 [2024-12-06 17:47:58.331609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.288 [2024-12-06 17:47:58.331616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.288 [2024-12-06 17:47:58.331623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.288 [2024-12-06 17:47:58.331640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.288 qpair failed and we were unable to recover it.
00:32:06.288 [2024-12-06 17:47:58.341591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.288 [2024-12-06 17:47:58.341641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.288 [2024-12-06 17:47:58.341655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.288 [2024-12-06 17:47:58.341663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.288 [2024-12-06 17:47:58.341673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.288 [2024-12-06 17:47:58.341687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.288 qpair failed and we were unable to recover it.
00:32:06.552 [2024-12-06 17:47:58.351665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.552 [2024-12-06 17:47:58.351717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.552 [2024-12-06 17:47:58.351731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.552 [2024-12-06 17:47:58.351738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.552 [2024-12-06 17:47:58.351745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.552 [2024-12-06 17:47:58.351759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.552 qpair failed and we were unable to recover it.
00:32:06.552 [2024-12-06 17:47:58.361679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.552 [2024-12-06 17:47:58.361749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.552 [2024-12-06 17:47:58.361762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.552 [2024-12-06 17:47:58.361770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.552 [2024-12-06 17:47:58.361776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.552 [2024-12-06 17:47:58.361791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.552 qpair failed and we were unable to recover it.
00:32:06.552 [2024-12-06 17:47:58.371617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:06.552 [2024-12-06 17:47:58.371661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:06.552 [2024-12-06 17:47:58.371675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:06.552 [2024-12-06 17:47:58.371683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:06.552 [2024-12-06 17:47:58.371689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0
00:32:06.552 [2024-12-06 17:47:58.371703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:06.552 qpair failed and we were unable to recover it.
00:32:06.552 [2024-12-06 17:47:58.381680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.552 [2024-12-06 17:47:58.381728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.552 [2024-12-06 17:47:58.381741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.552 [2024-12-06 17:47:58.381748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.552 [2024-12-06 17:47:58.381755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.552 [2024-12-06 17:47:58.381769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.552 qpair failed and we were unable to recover it. 00:32:06.552 [2024-12-06 17:47:58.391735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.552 [2024-12-06 17:47:58.391783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.552 [2024-12-06 17:47:58.391796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.552 [2024-12-06 17:47:58.391804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.552 [2024-12-06 17:47:58.391810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.552 [2024-12-06 17:47:58.391824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.552 qpair failed and we were unable to recover it. 00:32:06.552 [2024-12-06 17:47:58.401723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.552 [2024-12-06 17:47:58.401766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.552 [2024-12-06 17:47:58.401780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.552 [2024-12-06 17:47:58.401787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.552 [2024-12-06 17:47:58.401794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.552 [2024-12-06 17:47:58.401807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.552 qpair failed and we were unable to recover it. 
00:32:06.552 [2024-12-06 17:47:58.411729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.552 [2024-12-06 17:47:58.411773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.552 [2024-12-06 17:47:58.411786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.552 [2024-12-06 17:47:58.411793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.552 [2024-12-06 17:47:58.411800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.552 [2024-12-06 17:47:58.411813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.552 qpair failed and we were unable to recover it. 00:32:06.552 [2024-12-06 17:47:58.421789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.552 [2024-12-06 17:47:58.421837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.552 [2024-12-06 17:47:58.421851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.552 [2024-12-06 17:47:58.421858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.552 [2024-12-06 17:47:58.421864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.552 [2024-12-06 17:47:58.421878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.552 qpair failed and we were unable to recover it. 00:32:06.552 [2024-12-06 17:47:58.431759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.552 [2024-12-06 17:47:58.431814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.552 [2024-12-06 17:47:58.431831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.552 [2024-12-06 17:47:58.431838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.552 [2024-12-06 17:47:58.431845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.552 [2024-12-06 17:47:58.431858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.552 qpair failed and we were unable to recover it. 
00:32:06.552 [2024-12-06 17:47:58.441845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.552 [2024-12-06 17:47:58.441894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.552 [2024-12-06 17:47:58.441908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.552 [2024-12-06 17:47:58.441915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.552 [2024-12-06 17:47:58.441922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.552 [2024-12-06 17:47:58.441935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.552 qpair failed and we were unable to recover it. 00:32:06.552 [2024-12-06 17:47:58.451855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.552 [2024-12-06 17:47:58.451903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.552 [2024-12-06 17:47:58.451916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.451923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.451930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.451943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 00:32:06.553 [2024-12-06 17:47:58.461897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.461943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.461956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.461963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.461969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.461983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 
00:32:06.553 [2024-12-06 17:47:58.471965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.472026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.472040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.472047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.472057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.472071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 00:32:06.553 [2024-12-06 17:47:58.481956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.482032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.482045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.482053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.482060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.482074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 00:32:06.553 [2024-12-06 17:47:58.491949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.492039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.492054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.492061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.492068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.492083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 
00:32:06.553 [2024-12-06 17:47:58.502008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.502053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.502067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.502074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.502080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.502094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 00:32:06.553 [2024-12-06 17:47:58.512081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.512168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.512181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.512189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.512195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.512209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 00:32:06.553 [2024-12-06 17:47:58.522033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.522073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.522086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.522094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.522100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.522114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 
00:32:06.553 [2024-12-06 17:47:58.532080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.532122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.532136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.532143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.532150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.532163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 00:32:06.553 [2024-12-06 17:47:58.542111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.542154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.542167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.542174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.542181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.542195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 00:32:06.553 [2024-12-06 17:47:58.552180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.552231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.552244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.552252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.552258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.552271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 
00:32:06.553 [2024-12-06 17:47:58.562036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.562082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.562100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.562107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.562113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.562127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 00:32:06.553 [2024-12-06 17:47:58.572190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.572235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.572249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.553 [2024-12-06 17:47:58.572256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.553 [2024-12-06 17:47:58.572262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.553 [2024-12-06 17:47:58.572276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.553 qpair failed and we were unable to recover it. 00:32:06.553 [2024-12-06 17:47:58.582259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.553 [2024-12-06 17:47:58.582346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.553 [2024-12-06 17:47:58.582360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.554 [2024-12-06 17:47:58.582367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.554 [2024-12-06 17:47:58.582374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.554 [2024-12-06 17:47:58.582387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.554 qpair failed and we were unable to recover it. 
00:32:06.554 [2024-12-06 17:47:58.592286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.554 [2024-12-06 17:47:58.592386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.554 [2024-12-06 17:47:58.592401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.554 [2024-12-06 17:47:58.592408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.554 [2024-12-06 17:47:58.592415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.554 [2024-12-06 17:47:58.592429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.554 qpair failed and we were unable to recover it. 00:32:06.554 [2024-12-06 17:47:58.602282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.554 [2024-12-06 17:47:58.602329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.554 [2024-12-06 17:47:58.602342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.554 [2024-12-06 17:47:58.602350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.554 [2024-12-06 17:47:58.602359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.554 [2024-12-06 17:47:58.602374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.554 qpair failed and we were unable to recover it. 00:32:06.554 [2024-12-06 17:47:58.612300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.554 [2024-12-06 17:47:58.612343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.554 [2024-12-06 17:47:58.612356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.554 [2024-12-06 17:47:58.612363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.554 [2024-12-06 17:47:58.612370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.554 [2024-12-06 17:47:58.612383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.554 qpair failed and we were unable to recover it. 
00:32:06.815 [2024-12-06 17:47:58.622308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.815 [2024-12-06 17:47:58.622354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.815 [2024-12-06 17:47:58.622367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.815 [2024-12-06 17:47:58.622375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.815 [2024-12-06 17:47:58.622382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.815 [2024-12-06 17:47:58.622395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.815 qpair failed and we were unable to recover it. 00:32:06.815 [2024-12-06 17:47:58.632398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.815 [2024-12-06 17:47:58.632480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.815 [2024-12-06 17:47:58.632493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.815 [2024-12-06 17:47:58.632501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.815 [2024-12-06 17:47:58.632508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.815 [2024-12-06 17:47:58.632522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.815 qpair failed and we were unable to recover it. 00:32:06.815 [2024-12-06 17:47:58.642365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.815 [2024-12-06 17:47:58.642412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.815 [2024-12-06 17:47:58.642425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.815 [2024-12-06 17:47:58.642433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.815 [2024-12-06 17:47:58.642439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.815 [2024-12-06 17:47:58.642453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.815 qpair failed and we were unable to recover it. 
00:32:06.815 [2024-12-06 17:47:58.652411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.815 [2024-12-06 17:47:58.652454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.815 [2024-12-06 17:47:58.652468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.815 [2024-12-06 17:47:58.652476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.815 [2024-12-06 17:47:58.652482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.815 [2024-12-06 17:47:58.652495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.815 qpair failed and we were unable to recover it. 00:32:06.815 [2024-12-06 17:47:58.662459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.815 [2024-12-06 17:47:58.662505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.815 [2024-12-06 17:47:58.662519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.815 [2024-12-06 17:47:58.662526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.815 [2024-12-06 17:47:58.662533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.815 [2024-12-06 17:47:58.662546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.815 qpair failed and we were unable to recover it. 00:32:06.815 [2024-12-06 17:47:58.672472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.815 [2024-12-06 17:47:58.672536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.815 [2024-12-06 17:47:58.672549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.672557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.672563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.672576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 
00:32:06.816 [2024-12-06 17:47:58.682497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.682546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.682560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.682567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.682574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.682587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 00:32:06.816 [2024-12-06 17:47:58.692523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.692607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.692626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.692633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.692643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.692657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 00:32:06.816 [2024-12-06 17:47:58.702533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.702615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.702629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.702641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.702648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.702662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 
00:32:06.816 [2024-12-06 17:47:58.712464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.712510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.712524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.712531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.712538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.712551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 00:32:06.816 [2024-12-06 17:47:58.722610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.722657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.722671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.722678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.722684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.722698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 00:32:06.816 [2024-12-06 17:47:58.732584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.732634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.732650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.732657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.732668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.732681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 
00:32:06.816 [2024-12-06 17:47:58.742627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.742706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.742720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.742728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.742734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.742748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 00:32:06.816 [2024-12-06 17:47:58.752694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.752748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.752761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.752768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.752775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.752789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 00:32:06.816 [2024-12-06 17:47:58.762706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.762757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.762771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.762778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.762784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.762799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 
00:32:06.816 [2024-12-06 17:47:58.772728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.772769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.772783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.772791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.772797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.772810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 00:32:06.816 [2024-12-06 17:47:58.782740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.782788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.782802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.782809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.782815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23af0c0 00:32:06.816 [2024-12-06 17:47:58.782829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:06.816 qpair failed and we were unable to recover it. 00:32:06.816 [2024-12-06 17:47:58.792803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.816 [2024-12-06 17:47:58.792951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.816 [2024-12-06 17:47:58.793019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.816 [2024-12-06 17:47:58.793046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.816 [2024-12-06 17:47:58.793068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc280000b90 00:32:06.817 [2024-12-06 17:47:58.793124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:06.817 qpair failed and we were unable to recover it. 
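The last block before the reset is the tell: the failing tqpair address changes from 0x23af0c0 to 0x7fc280000b90 and the qpair id drops from 3 to 2, meaning the host has moved on to another queue pair of the same dead association. On a live target, the state behind "Unknown controller ID" can be inspected over the RPC socket; a hypothetical debugging sketch, not taken from this log, assuming SPDK's rpc.py is reachable at its default /var/tmp/spdk.sock:

    # List the controllers and queue pairs the target still tracks
    # for the subsystem named in the errors above.
    scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1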
00:32:06.817 [2024-12-06 17:47:58.802833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:06.817 [2024-12-06 17:47:58.802900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:06.817 [2024-12-06 17:47:58.802931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:06.817 [2024-12-06 17:47:58.802948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:06.817 [2024-12-06 17:47:58.802962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc280000b90 00:32:06.817 [2024-12-06 17:47:58.802994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:06.817 qpair failed and we were unable to recover it. 00:32:06.817 [2024-12-06 17:47:58.803153] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:32:06.817 A controller has encountered a failure and is being reset. 00:32:06.817 [2024-12-06 17:47:58.803259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a4e10 (9): Bad file descriptor 00:32:06.817 Controller properly reset. 00:32:06.817 Initializing NVMe Controllers 00:32:06.817 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:06.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:06.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:06.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:06.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:06.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:06.817 Initialization complete. Launching workers. 
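The "Attaching to NVMe over Fabrics controller" and "Associating TCP ... with lcore 0-3" lines are an SPDK host application re-establishing the controller with one queue pair per core after the reset. The transport ID it was handed is exactly the tuple repeated throughout the errors above; as a sketch, the same attach can be reproduced stand-alone with spdk_nvme_perf (the binary path and workload flags below are illustrative assumptions; the -r transport string is copied from the log):

    # Illustrative reattach using the transport ID from this log:
    ./build/examples/spdk_nvme_perf -q 32 -o 4096 -w randread -t 5 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'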
00:32:06.817 Starting thread on core 1 00:32:06.817 Starting thread on core 2 00:32:06.817 Starting thread on core 3 00:32:06.817 Starting thread on core 0 00:32:06.817 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:32:06.817 00:32:06.817 real 0m11.449s 00:32:06.817 user 0m21.800s 00:32:06.817 sys 0m3.998s 00:32:06.817 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.817 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:06.817 ************************************ 00:32:06.817 END TEST nvmf_target_disconnect_tc2 00:32:06.817 ************************************ 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:07.078 rmmod nvme_tcp 00:32:07.078 rmmod nvme_fabrics 00:32:07.078 rmmod nvme_keyring 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1735153 ']' 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1735153 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1735153 ']' 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1735153 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.078 17:47:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735153 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735153' 00:32:07.078 killing process with pid 1735153 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1735153 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1735153 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.078 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.339 17:47:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.252 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:09.252 00:32:09.252 real 0m21.665s 00:32:09.252 user 0m49.615s 00:32:09.252 sys 0m10.030s 00:32:09.252 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.252 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:09.252 ************************************ 00:32:09.252 END TEST nvmf_target_disconnect 00:32:09.252 ************************************ 00:32:09.252 17:48:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:09.252 00:32:09.252 real 6m31.010s 00:32:09.252 user 11m31.803s 00:32:09.252 sys 2m14.201s 00:32:09.252 17:48:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.252 17:48:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.252 ************************************ 00:32:09.252 END TEST nvmf_host 00:32:09.252 ************************************ 00:32:09.252 17:48:01 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:32:09.252 17:48:01 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:32:09.252 17:48:01 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:09.252 17:48:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:09.252 17:48:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.252 17:48:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:09.512 ************************************ 00:32:09.512 START TEST nvmf_target_core_interrupt_mode 00:32:09.512 ************************************ 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:09.512 * Looking for test storage... 00:32:09.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.512 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:09.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.512 --rc genhtml_branch_coverage=1 00:32:09.512 --rc genhtml_function_coverage=1 00:32:09.512 --rc genhtml_legend=1 00:32:09.512 --rc geninfo_all_blocks=1 00:32:09.512 --rc geninfo_unexecuted_blocks=1 00:32:09.513 00:32:09.513 ' 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.513 --rc genhtml_branch_coverage=1 00:32:09.513 --rc genhtml_function_coverage=1 00:32:09.513 --rc genhtml_legend=1 00:32:09.513 --rc geninfo_all_blocks=1 00:32:09.513 --rc geninfo_unexecuted_blocks=1 00:32:09.513 00:32:09.513 ' 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.513 --rc genhtml_branch_coverage=1 00:32:09.513 --rc genhtml_function_coverage=1 00:32:09.513 --rc genhtml_legend=1 00:32:09.513 --rc geninfo_all_blocks=1 00:32:09.513 --rc geninfo_unexecuted_blocks=1 00:32:09.513 00:32:09.513 ' 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.513 --rc genhtml_branch_coverage=1 00:32:09.513 --rc genhtml_function_coverage=1 00:32:09.513 --rc genhtml_legend=1 00:32:09.513 --rc geninfo_all_blocks=1 00:32:09.513 --rc geninfo_unexecuted_blocks=1 00:32:09.513 00:32:09.513 ' 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.513 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:09.774 ************************************ 00:32:09.774 START TEST nvmf_abort 00:32:09.774 ************************************ 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:09.774 * Looking for test storage... 00:32:09.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:09.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.774 --rc genhtml_branch_coverage=1 00:32:09.774 --rc genhtml_function_coverage=1 00:32:09.774 --rc genhtml_legend=1 00:32:09.774 --rc geninfo_all_blocks=1 00:32:09.774 --rc geninfo_unexecuted_blocks=1 00:32:09.774 00:32:09.774 ' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:09.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.774 --rc genhtml_branch_coverage=1 00:32:09.774 --rc genhtml_function_coverage=1 00:32:09.774 --rc genhtml_legend=1 00:32:09.774 --rc geninfo_all_blocks=1 00:32:09.774 --rc geninfo_unexecuted_blocks=1 00:32:09.774 00:32:09.774 ' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:09.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.774 --rc genhtml_branch_coverage=1 00:32:09.774 --rc genhtml_function_coverage=1 00:32:09.774 --rc genhtml_legend=1 00:32:09.774 --rc geninfo_all_blocks=1 00:32:09.774 --rc geninfo_unexecuted_blocks=1 00:32:09.774 00:32:09.774 ' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:09.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.774 --rc genhtml_branch_coverage=1 00:32:09.774 --rc genhtml_function_coverage=1 00:32:09.774 --rc genhtml_legend=1 00:32:09.774 --rc geninfo_all_blocks=1 00:32:09.774 --rc geninfo_unexecuted_blocks=1 00:32:09.774 00:32:09.774 ' 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.774 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.035 17:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:32:10.035 17:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:18.168 17:48:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:18.168 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
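[Editor's annotation, not part of the captured log: gather_supported_nvmf_pci_devs above buckets candidate NICs into e810/x722/mlx arrays keyed by PCI "vendor:device" IDs, then keeps only the e810 bucket (the `[[ e810 == e810 ]]` branch), which matches both 0x8086:0x159b ports on bus 0000:4b in this run. A minimal sketch of that associative-array lookup, with a hand-filled pci_bus_cache standing in for the real PCI scan that scripts/common.sh performs elsewhere:]

  declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"   # the two ports found in this run
  )
  e810=() x722=() mlx=()
  # unquoted expansion deliberately word-splits multi-device cache entries
  e810+=(${pci_bus_cache["0x8086:0x1592"]:-})   # E810-C
  e810+=(${pci_bus_cache["0x8086:0x159b"]:-})   # E810-XXV, matches this host
  x722+=(${pci_bus_cache["0x8086:0x37d2"]:-})
  pci_devs=("${e810[@]}")                       # keep only the selected NIC family
  echo "candidate NVMe-oF NICs: ${pci_devs[*]}"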
00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.168 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:18.169 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:18.169 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:18.169 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.169 17:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:18.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:32:18.169 00:32:18.169 --- 10.0.0.2 ping statistics --- 00:32:18.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.169 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:18.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:32:18.169 00:32:18.169 --- 10.0.0.1 ping statistics --- 00:32:18.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.169 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1738068 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1738068 00:32:18.169 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:18.170 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1738068 ']' 00:32:18.170 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.170 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:18.170 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.170 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:18.170 17:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.170 [2024-12-06 17:48:09.369784] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:18.170 [2024-12-06 17:48:09.370951] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:32:18.170 [2024-12-06 17:48:09.371006] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.170 [2024-12-06 17:48:09.470203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:18.170 [2024-12-06 17:48:09.522295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.170 [2024-12-06 17:48:09.522344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.170 [2024-12-06 17:48:09.522353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.170 [2024-12-06 17:48:09.522361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.170 [2024-12-06 17:48:09.522368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:18.170 [2024-12-06 17:48:09.524149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.170 [2024-12-06 17:48:09.524311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.170 [2024-12-06 17:48:09.524312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:18.170 [2024-12-06 17:48:09.603285] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:18.170 [2024-12-06 17:48:09.604401] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:18.170 [2024-12-06 17:48:09.604805] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
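[Editor's annotation, not part of the captured log: nvmf_tcp_init a few entries above turns the two e810 ports into a self-contained test topology. cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, a firewall rule tagged SPDK_NVMF opens the NVMe/TCP port, and both directions are ping-verified before the target app is launched under `ip netns exec`. Reduced to essentials, the commands the trace shows are:]

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target ns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

[Keeping the target in its own namespace lets one box act as both NVMe-oF target and initiator over real e810 hardware without any routing tricks, and the rule comment tag is what makes the later firewall cleanup a simple filter.]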
00:32:18.170 [2024-12-06 17:48:09.604932] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:18.170 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.170 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:32:18.170 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:18.170 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:18.170 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.170 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.170 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:32:18.170 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.170 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.431 [2024-12-06 17:48:10.233209] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.431 Malloc0 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.431 Delay0 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
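[Editor's annotation, not part of the captured log: nvmfappstart above launches nvmf_tgt inside the namespace with `-i 0 -e 0xFFFF --interrupt-mode -m 0xE`. The `-m 0xE` core mask is binary 1110, i.e. cores 1, 2 and 3, which is why the startup notices report "Total cores available: 3" and three reactors on cores 1-3, whose poll-group threads are then each switched to interrupt mode. A quick way to decode such a mask:]

  mask=0xE
  printf 'reactor cores:'
  for c in {0..31}; do (( (mask >> c) & 1 )) && printf ' %d' "$c"; done
  echo    # prints: reactor cores: 1 2 3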
00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.431 [2024-12-06 17:48:10.341136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.431 17:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:18.431 [2024-12-06 17:48:10.451548] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:20.976 Initializing NVMe Controllers 00:32:20.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:20.976 controller IO queue size 128 less than required 00:32:20.976 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:20.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:20.976 Initialization complete. Launching workers. 
00:32:20.976 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 28573 00:32:20.976 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28631, failed to submit 66 00:32:20.976 success 28573, unsuccessful 58, failed 0 00:32:20.976 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:20.977 rmmod nvme_tcp 00:32:20.977 rmmod nvme_fabrics 00:32:20.977 rmmod nvme_keyring 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1738068 ']' 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1738068 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1738068 ']' 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1738068 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1738068 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1738068' 00:32:20.977 killing process with pid 1738068 
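[Editor's annotation, not part of the captured log: everything the abort test configured was driven over the target's JSON-RPC socket before launching the example app. Stripped of the rpc_cmd/xtrace wrappers, the sequence above is equivalent to the following direct scripts/rpc.py invocations (hypothetical direct form; the harness actually goes through its rpc_cmd wrapper and the `-i 0` instance socket inside the namespace). Note bdev_delay_create latencies are given in microseconds, so Delay0 injects roughly a second per I/O, which is what keeps the 128-deep queue full of abortable commands:]

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB, 4 KiB blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000               # ~1 s per read/write
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

[The tallies read as: 28697 I/Os completed in total, 124 normally and 28573 via abort; of the 28631 aborts submitted, 28573 caught their target and 58 were unsuccessful (most plausibly the target command had already completed). The nvmftestfini teardown that follows then unloads nvme-tcp/nvme-fabrics/nvme-keyring and strips only its own tagged firewall rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore`.]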
00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1738068 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1738068 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.977 17:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.890 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.890 00:32:22.890 real 0m13.316s 00:32:22.890 user 0m10.846s 00:32:22.890 sys 0m6.913s 00:32:22.890 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.890 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.890 ************************************ 00:32:22.890 END TEST nvmf_abort 00:32:22.890 ************************************ 00:32:23.149 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:23.149 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:23.149 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:23.149 17:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:23.149 ************************************ 00:32:23.149 START TEST nvmf_ns_hotplug_stress 00:32:23.149 ************************************ 00:32:23.149 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:23.149 * Looking for test storage... 
00:32:23.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:23.149 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:23.149 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:32:23.149 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.410 --rc genhtml_branch_coverage=1 00:32:23.410 --rc genhtml_function_coverage=1 00:32:23.410 --rc genhtml_legend=1 00:32:23.410 --rc geninfo_all_blocks=1 00:32:23.410 --rc geninfo_unexecuted_blocks=1 00:32:23.410 00:32:23.410 ' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.410 --rc genhtml_branch_coverage=1 00:32:23.410 --rc genhtml_function_coverage=1 00:32:23.410 --rc genhtml_legend=1 00:32:23.410 --rc geninfo_all_blocks=1 00:32:23.410 --rc geninfo_unexecuted_blocks=1 00:32:23.410 00:32:23.410 ' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.410 --rc genhtml_branch_coverage=1 00:32:23.410 --rc genhtml_function_coverage=1 00:32:23.410 --rc genhtml_legend=1 00:32:23.410 --rc geninfo_all_blocks=1 00:32:23.410 --rc geninfo_unexecuted_blocks=1 00:32:23.410 00:32:23.410 ' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.410 --rc genhtml_branch_coverage=1 00:32:23.410 --rc genhtml_function_coverage=1 
00:32:23.410 --rc genhtml_legend=1 00:32:23.410 --rc geninfo_all_blocks=1 00:32:23.410 --rc geninfo_unexecuted_blocks=1 00:32:23.410 00:32:23.410 ' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:32:23.410 17:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:31.548 17:48:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:31.548 17:48:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:31.548 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:31.549 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:31.549 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:31.549 
17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:31.549 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:31.549 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:31.549 17:48:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:31.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:32:31.549 00:32:31.549 --- 10.0.0.2 ping statistics --- 00:32:31.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.549 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:31.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:32:31.549 00:32:31.549 --- 10.0.0.1 ping statistics --- 00:32:31.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.549 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1740830 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1740830 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1740830 ']' 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.549 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:31.550 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.550 17:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:31.550 [2024-12-06 17:48:22.709134] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:31.550 [2024-12-06 17:48:22.710274] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:32:31.550 [2024-12-06 17:48:22.710327] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.550 [2024-12-06 17:48:22.809130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:31.550 [2024-12-06 17:48:22.860557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.550 [2024-12-06 17:48:22.860610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.550 [2024-12-06 17:48:22.860620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.550 [2024-12-06 17:48:22.860627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.550 [2024-12-06 17:48:22.860633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.550 [2024-12-06 17:48:22.862414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:31.550 [2024-12-06 17:48:22.862579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.550 [2024-12-06 17:48:22.862580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:31.550 [2024-12-06 17:48:22.940579] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:31.550 [2024-12-06 17:48:22.941805] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:31.550 [2024-12-06 17:48:22.942083] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:31.550 [2024-12-06 17:48:22.942221] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:31.550 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.550 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:32:31.550 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.550 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.550 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:31.550 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.550 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:32:31.550 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:31.811 [2024-12-06 17:48:23.731461] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.811 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:32.071 17:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.071 [2024-12-06 17:48:24.116117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.071 17:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:32.333 17:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:32.595 Malloc0 00:32:32.595 17:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:32.857 Delay0 00:32:32.857 17:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:32.857 17:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:33.119 NULL1 00:32:33.119 17:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:32:33.381 17:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1740892 00:32:33.381 17:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:33.381 17:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:33.381 17:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:33.641 17:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:33.641 17:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:33.641 17:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:33.901 true 00:32:33.901 17:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:33.901 17:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:34.161 17:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:34.423 17:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:34.423 17:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:34.423 true 00:32:34.423 17:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:34.423 17:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:34.684 17:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:34.945 17:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:34.945 17:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:35.205 true 00:32:35.205 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:35.205 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.467 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.467 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:35.467 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:35.727 true 00:32:35.727 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:35.727 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.988 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.988 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:35.988 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:36.248 true 00:32:36.248 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:36.248 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.509 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:36.770 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:36.770 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:36.770 true 00:32:36.771 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:36.771 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.029 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:32:37.290 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:32:37.290 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:37.290 true 00:32:37.290 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:37.290 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.573 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:37.833 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:37.833 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:37.833 true 00:32:37.833 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:37.833 17:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:38.092 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.351 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:32:38.351 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:38.351 true 00:32:38.611 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:38.611 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:38.611 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.870 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:38.870 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:39.130 true 00:32:39.130 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1740892 00:32:39.130 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.130 17:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:39.389 17:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:39.389 17:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:39.648 true 00:32:39.648 17:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:39.648 17:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.907 17:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:39.907 17:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:39.907 17:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:40.166 true 00:32:40.166 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:40.166 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.427 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:40.427 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:40.427 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:40.687 true 00:32:40.687 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:40.687 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.947 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:41.207 17:48:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:41.207 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:41.207 true 00:32:41.207 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:41.207 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.467 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:41.727 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:41.727 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:41.727 true 00:32:41.727 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:41.727 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.986 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:42.246 17:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:42.246 17:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:42.246 true 00:32:42.507 17:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:42.507 17:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:42.507 17:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:42.768 17:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:42.768 17:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:43.028 true 00:32:43.028 17:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:43.028 17:48:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.028 17:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.287 17:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:43.287 17:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:43.546 true 00:32:43.546 17:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:43.546 17:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.806 17:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.806 17:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:43.806 17:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:44.066 true 00:32:44.066 17:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:44.066 17:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.326 17:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:44.326 17:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:44.326 17:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:44.586 true 00:32:44.586 17:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:44.586 17:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.846 17:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:45.105 17:48:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:45.105 17:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:45.105 true 00:32:45.105 17:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:45.105 17:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.365 17:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:45.625 17:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:45.625 17:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:45.625 true 00:32:45.625 17:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:45.625 17:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.885 17:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:46.145 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:46.145 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:46.406 true 00:32:46.406 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:46.406 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.406 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:46.666 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:46.666 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:46.926 true 00:32:46.926 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:46.926 17:48:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.926 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:47.185 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:47.185 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:47.444 true 00:32:47.444 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:47.444 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.703 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:47.703 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:47.703 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:47.963 true 00:32:47.963 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:47.963 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.223 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:48.483 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:48.483 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:32:48.483 true 00:32:48.483 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:48.483 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.742 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:49.002 17:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:49.002 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:49.002 true 00:32:49.002 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:49.002 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.262 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:49.523 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:32:49.523 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:32:49.523 true 00:32:49.523 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:49.523 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.784 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:50.045 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:32:50.045 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:32:50.045 true 00:32:50.306 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:50.306 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.306 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:50.567 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:32:50.567 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:32:50.828 true 00:32:50.828 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:50.828 17:48:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.828 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:51.090 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:32:51.090 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:32:51.351 true 00:32:51.351 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:51.351 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:51.612 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:51.612 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:32:51.612 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:32:51.872 true 00:32:51.872 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:51.872 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:52.133 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:52.133 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:32:52.133 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:32:52.394 true 00:32:52.394 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:52.394 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:52.679 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:52.679 17:48:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:32:52.679 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:32:52.974 true 00:32:52.974 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:52.974 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:53.270 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:53.270 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:32:53.270 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:32:53.536 true 00:32:53.536 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:53.536 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:53.795 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:53.795 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:32:53.795 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:32:54.055 true 00:32:54.055 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:54.055 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:54.315 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:54.574 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:32:54.574 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:32:54.574 true 00:32:54.574 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:54.574 17:48:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:54.833 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:55.093 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:32:55.093 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:32:55.093 true 00:32:55.093 17:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:55.093 17:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:55.353 17:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:55.612 17:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:32:55.612 17:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:32:55.871 true 00:32:55.871 17:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:55.871 17:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:55.871 17:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:56.130 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:32:56.130 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:32:56.389 true 00:32:56.389 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:56.389 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:56.649 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:56.649 17:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:32:56.649 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:32:56.908 true 00:32:56.908 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:56.908 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:57.168 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.168 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:32:57.168 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:32:57.428 true 00:32:57.428 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:57.428 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:57.688 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.688 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:32:57.688 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:32:57.949 true 00:32:57.949 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:57.949 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:58.210 17:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:58.471 17:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:32:58.471 17:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:32:58.471 true 00:32:58.471 17:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:58.471 17:48:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:58.731 17:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:58.991 17:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:32:58.991 17:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:32:58.991 true 00:32:58.991 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:58.991 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.251 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:59.510 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:32:59.510 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:32:59.510 true 00:32:59.510 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:32:59.510 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.770 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.030 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:33:00.030 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:33:00.291 true 00:33:00.291 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:33:00.291 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.291 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.551 17:48:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:33:00.551 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:33:00.811 true 00:33:00.811 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:33:00.811 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.811 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:01.071 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:33:01.071 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:33:01.331 true 00:33:01.331 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:33:01.331 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:01.608 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:01.608 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:33:01.608 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:33:01.869 true 00:33:01.869 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:33:01.869 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:01.869 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.130 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:33:02.130 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:33:02.391 true 00:33:02.391 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:33:02.391 17:48:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:02.652 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.652 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:33:02.652 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:33:02.911 true 00:33:02.911 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:33:02.911 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.173 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:03.173 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:33:03.173 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:33:03.432 true 00:33:03.432 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:33:03.432 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.691 Initializing NVMe Controllers 00:33:03.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:03.691 Controller IO queue size 128, less than required. 00:33:03.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:03.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:03.691 Initialization complete. Launching workers. 
00:33:03.691 ======================================================== 00:33:03.691 Latency(us) 00:33:03.691 Device Information : IOPS MiB/s Average min max 00:33:03.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30552.03 14.92 4189.53 1097.19 11340.06 00:33:03.691 ======================================================== 00:33:03.691 Total : 30552.03 14.92 4189.53 1097.19 11340.06 00:33:03.691 00:33:03.691 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:03.951 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:33:03.951 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:33:03.951 true 00:33:03.951 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1740892 00:33:03.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1740892) - No such process 00:33:03.951 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1740892 00:33:03.951 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:04.211 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:04.471 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:33:04.471 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:33:04.471 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:33:04.471 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:04.471 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:33:04.471 null0 00:33:04.471 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:04.471 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:04.471 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:33:04.731 null1 00:33:04.731 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:04.731 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:33:04.731 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:33:04.991 null2 00:33:04.991 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:04.991 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:04.991 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:33:04.991 null3 00:33:04.991 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:04.991 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:04.991 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:33:05.251 null4 00:33:05.251 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:05.251 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:05.251 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:33:05.511 null5 00:33:05.512 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:05.512 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:05.512 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:33:05.512 null6 00:33:05.512 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:05.512 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:05.512 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:33:05.773 null7 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:33:05.773 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1741435 1741436 1741439 1741440 1741442 1741444 1741446 1741448 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:05.774 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:06.035 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:06.035 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:06.035 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:06.035 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:06.035 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:06.035 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:06.035 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.035 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:06.035 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.035 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.036 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:06.036 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.036 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.036 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:06.036 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.036 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.036 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:06.036 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.036 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.036 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.298 17:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:06.298 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.560 17:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:06.560 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.820 17:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:06.820 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:07.079 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.079 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.079 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:07.079 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.079 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.079 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:07.079 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:07.079 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:07.079 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:07.079 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:07.079 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:07.079 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:07.079 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.079 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:07.338 17:48:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:07.338 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:07.597 17:48:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.597 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:07.856 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.856 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.856 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:07.856 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.856 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:07.857 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:08.117 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.117 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.117 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:08.117 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.117 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.117 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:08.117 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.117 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.117 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:08.117 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.117 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.117 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:08.117 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.117 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.117 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:08.117 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.118 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.118 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:08.118 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:08.118 17:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:08.118 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:08.118 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:08.118 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:08.378 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:08.378 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.379 17:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.379 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:08.640 
17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.640 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:08.901 17:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:09.161 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.162 
17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.162 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:09.162 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:09.421 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
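The trace above is the body of the stress test repeating: ns_hotplug_stress.sh@16 counts ten iterations, @17 hot-adds null bdevs null0-null7 as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1, and @18 hot-removes all eight again, with the NSID order shuffled on every pass. A minimal sketch of that loop, reconstructed from the xtrace; the shuf call and the rpc/subsys variable names are assumptions, not the verbatim script:

    # Reconstructed shape of the add/remove cycle traced above (a sketch, not the script).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; ++i)); do                 # ns_hotplug_stress.sh@16
        for n in $(shuf -e {1..8}); do             # hot-add NSIDs 1-8 in random order
            $rpc nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))"   # sh@17
        done
        for n in $(shuf -e {1..8}); do             # hot-remove them, again shuffled
            $rpc nvmf_subsystem_remove_ns "$subsys" "$n"                    # sh@18
        done
    done

The shuffling is the point: namespaces disappear and reappear in a different order each iteration, so the target's hotplug paths see arbitrary interleavings rather than one fixed sequence.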
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:09.422 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:09.422 rmmod nvme_tcp
00:33:09.680 rmmod nvme_fabrics
00:33:09.680 rmmod nvme_keyring
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
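With the loop done, nvmftestfini begins teardown. Its first stage, nvmfcleanup (nvmf/common.sh@121-@129 above), syncs and then retries unloading the kernel NVMe-oF stack until module references drain; the rmmod lines are modprobe's verbose output. A simplified sketch of the traced behavior, assuming a short backoff between attempts that the xtrace itself does not show:

    # Sketch of the unload loop traced above; not the verbatim helper.
    nvmfcleanup() {
        sync
        if [[ $TEST_TRANSPORT == tcp ]]; then  # the trace evaluates '[' tcp == tcp ']'
            set +e                             # unloading can fail while references remain
            for i in {1..20}; do               # nvmf/common.sh@125
                modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
                sleep 1                        # assumption: back off, then retry
            done
            set -e
        fi
    }

Here the very first attempt succeeds (nvme_tcp, nvme_fabrics and nvme_keyring all unload), so the helper returns 0 immediately.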
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1740830 ']'
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1740830
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1740830 ']'
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1740830
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740830
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740830'
killing process with pid 1740830
00:33:09.680 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1740830
00:33:09.681 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1740830
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:09.941 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:11.849 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:11.849
00:33:11.849 real 0m48.802s
00:33:11.849 user 3m2.445s
00:33:11.849 sys 0m21.926s
00:33:11.849 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:11.849 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:33:11.849 ************************************
00:33:11.849 END TEST nvmf_ns_hotplug_stress
00:33:11.849 ************************************
00:33:11.849 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:33:11.849 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:11.849 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:12.110 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
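The real/user/sys triple and the asterisk banners come from run_test in common/autotest_common.sh, which brackets each test script with START/END markers and times it; the run_test entry above launches nvmf_delete_subsystem the same way, and its START banner follows below. A rough sketch of the wrapper's visible behavior, reconstructed from the log rather than copied from the helper:

    # Rough shape of run_test as observed in this log: banner, timed run, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # here: delete_subsystem.sh --transport=tcp --interrupt-mode
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

By that accounting, nvmf_ns_hotplug_stress took 48.8 s of wall-clock time against just over 3 m of CPU time across cores.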
00:33:12.110 ************************************
00:33:12.110 START TEST nvmf_delete_subsystem
************************************
00:33:12.110 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:33:12.110 * Looking for test storage...
00:33:12.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
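What this trace is doing: common/autotest_common.sh@1711 extracts the installed lcov version (1.15 here) and asks lt 1.15 2, i.e. whether it predates lcov 2.0, to pick a compatible flag set. cmp_versions splits both strings on '.', '-' and ':' and compares field by field, validating each field with decimal (sh@353-@355). A compact reconstruction under those assumptions; the verbatim scripts/common.sh handles more cases (the case "$op" dispatch, trailing-field rules) than this sketch:

    lt() { cmp_versions "$1" '<' "$2"; }     # scripts/common.sh@373

    decimal() {                              # only the all-digit path appears in this trace
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d"
    }

    cmp_versions() {
        local IFS=.-:                        # field separators, as at scripts/common.sh@336
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]
    }

For lt 1.15 2: ver1=(1 15) and ver2=(2); at v=0 the compare 1 < 2 hits the '<' branch and succeeds, which is the '# return 0' recorded above.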
ver1_l : ver2_l) )) 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:12.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.110 --rc genhtml_branch_coverage=1 00:33:12.110 --rc genhtml_function_coverage=1 00:33:12.110 --rc genhtml_legend=1 00:33:12.110 --rc geninfo_all_blocks=1 00:33:12.110 --rc geninfo_unexecuted_blocks=1 00:33:12.110 00:33:12.110 ' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:12.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.110 --rc genhtml_branch_coverage=1 00:33:12.110 --rc genhtml_function_coverage=1 00:33:12.110 --rc genhtml_legend=1 00:33:12.110 --rc geninfo_all_blocks=1 00:33:12.110 --rc geninfo_unexecuted_blocks=1 00:33:12.110 00:33:12.110 ' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:12.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.110 --rc genhtml_branch_coverage=1 00:33:12.110 --rc genhtml_function_coverage=1 00:33:12.110 --rc genhtml_legend=1 00:33:12.110 --rc geninfo_all_blocks=1 00:33:12.110 --rc geninfo_unexecuted_blocks=1 00:33:12.110 00:33:12.110 ' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:12.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.110 --rc genhtml_branch_coverage=1 00:33:12.110 --rc genhtml_function_coverage=1 00:33:12.110 --rc 
genhtml_legend=1 00:33:12.110 --rc geninfo_all_blocks=1 00:33:12.110 --rc geninfo_unexecuted_blocks=1 00:33:12.110 00:33:12.110 ' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.110 17:49:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.110 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:20.246 17:49:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:20.246 17:49:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:20.246 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:20.246 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.246 17:49:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:20.246 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:20.246 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:20.246 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:20.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:20.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms
00:33:20.247 
00:33:20.247 --- 10.0.0.2 ping statistics ---
00:33:20.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:20.247 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:20.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:20.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms
00:33:20.247 
00:33:20.247 --- 10.0.0.1 ping statistics ---
00:33:20.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:20.247 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1744090
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1744090
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1744090 ']'
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:20.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:20.247 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 [2024-12-06 17:49:11.600462] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:20.247 [2024-12-06 17:49:11.601622] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:33:20.247 [2024-12-06 17:49:11.601689] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.247 [2024-12-06 17:49:11.698977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:20.247 [2024-12-06 17:49:11.750411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.247 [2024-12-06 17:49:11.750464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.247 [2024-12-06 17:49:11.750472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.247 [2024-12-06 17:49:11.750480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.247 [2024-12-06 17:49:11.750486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:20.247 [2024-12-06 17:49:11.752084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.247 [2024-12-06 17:49:11.752087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.247 [2024-12-06 17:49:11.829988] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:20.247 [2024-12-06 17:49:11.830670] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:20.247 [2024-12-06 17:49:11.830913] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
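The setup traced above amounts to two steps: nvmf/common.sh moves one port of the E810 pair (cvl_0_0) into a private network namespace and gives the pair the 10.0.0.0/24 addresses, then nvmfappstart launches nvmf_tgt inside that namespace in interrupt mode and waits for its RPC socket. A condensed sketch of the same sequence follows, using only the commands and flags visible in the trace; the polling loop is a simplified stand-in for the harness's waitforlisten helper, and the relative nvmf_tgt path is an assumption, not the harness's actual layout.

  #!/usr/bin/env bash
  # Sketch only: physical-NIC namespace setup plus target launch, as traced above.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Interrupt-mode target on cores 0-1 (-m 0x3), tracepoint group mask 0xFFFF.
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!

  # Simplified stand-in for waitforlisten: poll for the RPC socket to appear.
  for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break
      kill -0 "$nvmfpid" || exit 1             # target died during startup
      sleep 0.5
  done

The "Set spdk_thread (...) to intr mode" notices above confirm the effect of --interrupt-mode: the app thread and both poll-group threads come up interrupt-driven rather than busy-polling.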
00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:20.509 [2024-12-06 17:49:12.461131] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:20.509 [2024-12-06 17:49:12.493497] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:20.509 NULL1 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.509 17:49:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:20.509 Delay0 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.509 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:20.510 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.510 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1744125 00:33:20.510 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:20.510 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:20.771 [2024-12-06 17:49:12.617546] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
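Taken together, the RPCs traced since the target came up build a deliberately slow fixture: a null bdev wrapped in a delay bdev whose latencies (given in microseconds) are all set to one second, exposed through nqn.2016-06.io.spdk:cnode1, with spdk_nvme_perf pushing queue-depth-128 random I/O at it. Because each op takes about a second, close to a full queue of commands is still outstanding when, two seconds in, the script deletes the subsystem out from under the initiator. As a stand-alone sketch (the rpc.py path is assumed; every argument is copied from the trace):

  # Sketch of the fixture built by delete_subsystem.sh, per the trace above.
  rpc=./scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 1000 MiB null bdev behind a delay bdev: 1,000,000 us (= 1 s) average and
  # p99 latency for both reads and writes, so I/O piles up inside the target.
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # 70/30 random read/write load at queue depth 128, then delete mid-flight.
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The wall of "Read/Write completed with error (sct=0, sc=8)" entries that follows is the point of the test: sct=0/sc=8 appears to decode to the NVMe generic status "Command Aborted due to SQ Deletion", i.e. the queued commands are failed back to the initiator as the subsystem's qpairs are torn down, rather than being leaked or left hanging.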
00:33:22.683 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:22.683 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.683 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 [2024-12-06 17:49:14.838556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1680 is same with the state(6) to be set 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read 
completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error 
(sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 Write completed with error (sct=0, sc=8) 00:33:22.944 Read completed with error (sct=0, sc=8) 00:33:22.944 starting I/O failed: -6 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 
00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 Write completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 Read completed with error (sct=0, sc=8) 00:33:22.945 starting I/O failed: -6 00:33:22.945 [2024-12-06 17:49:14.842328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa19c000c40 is same with the state(6) to be set 00:33:22.945 starting I/O failed: -6 00:33:22.945 starting I/O failed: -6 00:33:23.899 [2024-12-06 17:49:15.801851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d29b0 is same with the state(6) to be set 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write 
completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 [2024-12-06 17:49:15.842727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1860 is same with the state(6) to be set 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 [2024-12-06 17:49:15.842889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d14a0 is same with the state(6) to be set 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with 
error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 [2024-12-06 17:49:15.843992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa19c00d020 is same with the state(6) to be set 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Read completed with error (sct=0, sc=8) 00:33:23.899 Write completed with error (sct=0, sc=8) 00:33:23.900 Read completed with error (sct=0, sc=8) 00:33:23.900 Read completed with error (sct=0, sc=8) 00:33:23.900 Write completed with error (sct=0, sc=8) 00:33:23.900 [2024-12-06 17:49:15.844110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa19c00d7c0 is same with the state(6) to be set 00:33:23.900 Initializing NVMe Controllers 00:33:23.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:23.900 Controller IO queue size 128, less than required. 00:33:23.900 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:23.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:23.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:23.900 Initialization complete. Launching workers. 
00:33:23.900 ========================================================
00:33:23.900                                                                            Latency(us)
00:33:23.900 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:33:23.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     170.61       0.08  893329.09     373.48 1009024.33
00:33:23.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     175.09       0.09  922379.36     422.43 1011873.15
00:33:23.900 ========================================================
00:33:23.900 Total                                                                    :     345.70       0.17  908042.32     373.48 1011873.15
00:33:23.900
00:33:23.900 [2024-12-06 17:49:15.844608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d29b0 (9): Bad file descriptor
00:33:23.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:33:23.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:23.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:33:23.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1744125
00:33:23.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1744125
00:33:24.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1744125) - No such process
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1744125
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1744125
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1744125
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:24.472 [2024-12-06 17:49:16.377401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1744166 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1744166 00:33:24.472 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:24.472 [2024-12-06 17:49:16.477477] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
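
Note on the pass traced above: this is the delete-while-busy scenario. spdk_nvme_perf is run against nqn.2016-06.io.spdk:cnode1, the subsystem is deleted while I/O is still queued, so every outstanding command completes with sct=0, sc=8 (NVMe Generic Command Status, Command Aborted due to SQ Deletion — the long runs of aborted reads and writes above), the script polls until perf exits, and the subsystem is then re-created for the next pass. A minimal stand-alone sketch of that cycle, assuming SPDK's stock scripts/rpc.py and reusing the paths, NQN, and perf flags shown in the trace (the real test goes through the rpc_cmd/NOT helpers instead):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    # Start the load generator in the background (flags copied from the trace).
    $SPDK/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    # Delete the subsystem while perf still has commands in flight; those
    # commands complete with sc=8 (aborted due to SQ deletion), as seen above.
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Poll until perf notices and exits (the delay/kill -0/sleep 0.5 loop in the trace).
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        (( delay++ > 30 )) && exit 1   # give up after ~15s
        sleep 0.5
    done
    # Re-create the subsystem for the next pass (arguments from the trace).
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
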
00:33:25.043 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:25.043 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1744166 00:33:25.043 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:25.614 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:25.614 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1744166 00:33:25.614 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:25.874 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:25.874 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1744166 00:33:25.874 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:26.444 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:26.444 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1744166 00:33:26.444 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:27.034 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:27.034 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1744166 00:33:27.034 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:27.605 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:27.605 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1744166 00:33:27.605 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:27.605 Initializing NVMe Controllers 00:33:27.605 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:27.605 Controller IO queue size 128, less than required. 00:33:27.605 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:27.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:27.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:27.605 Initialization complete. Launching workers. 
00:33:27.605 ========================================================
00:33:27.605                                                                            Latency(us)
00:33:27.605 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:33:27.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002190.16 1000300.07 1005741.01
00:33:27.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1003612.35 1000417.15 1009303.46
00:33:27.605 ========================================================
00:33:27.605 Total                                                                    :     256.00       0.12 1002901.25 1000300.07 1009303.46
00:33:27.605
00:33:27.865 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1744166
00:33:28.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1744166) - No such process
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1744166
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:28.125 rmmod nvme_tcp
00:33:28.125 rmmod nvme_fabrics
00:33:28.125 rmmod nvme_keyring
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1744090 ']'
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1744090
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1744090 ']'
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1744090
00:33:28.125 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744090 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744090' 00:33:28.125 killing process with pid 1744090 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1744090 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1744090 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:33:28.125 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:28.126 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:28.126 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.126 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.126 17:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.670 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:30.670 00:33:30.670 real 0m18.334s 00:33:30.670 user 0m26.630s 00:33:30.670 sys 0m7.594s 00:33:30.670 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.670 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:30.670 ************************************ 00:33:30.670 END TEST nvmf_delete_subsystem 00:33:30.670 ************************************ 00:33:30.670 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:30.670 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:30.670 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.670 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:30.670 ************************************ 00:33:30.670 START TEST nvmf_host_management 00:33:30.670 ************************************ 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:30.671 * Looking for test storage... 00:33:30.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:30.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.671 --rc genhtml_branch_coverage=1 00:33:30.671 --rc genhtml_function_coverage=1 00:33:30.671 --rc genhtml_legend=1 00:33:30.671 --rc geninfo_all_blocks=1 00:33:30.671 --rc geninfo_unexecuted_blocks=1 00:33:30.671 00:33:30.671 ' 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:30.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.671 --rc genhtml_branch_coverage=1 00:33:30.671 --rc genhtml_function_coverage=1 00:33:30.671 --rc genhtml_legend=1 00:33:30.671 --rc geninfo_all_blocks=1 00:33:30.671 --rc geninfo_unexecuted_blocks=1 00:33:30.671 00:33:30.671 ' 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:30.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.671 --rc genhtml_branch_coverage=1 00:33:30.671 --rc genhtml_function_coverage=1 00:33:30.671 --rc genhtml_legend=1 00:33:30.671 --rc geninfo_all_blocks=1 00:33:30.671 --rc geninfo_unexecuted_blocks=1 00:33:30.671 00:33:30.671 ' 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:30.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.671 --rc genhtml_branch_coverage=1 00:33:30.671 --rc genhtml_function_coverage=1 00:33:30.671 --rc genhtml_legend=1 
00:33:30.671 --rc geninfo_all_blocks=1 00:33:30.671 --rc geninfo_unexecuted_blocks=1 00:33:30.671 00:33:30.671 ' 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.671 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.672 17:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:33:30.672 17:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:38.816 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.816 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:33:38.816 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:38.816 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:38.816 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:38.816 17:49:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:38.816 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:38.816 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:38.817 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:38.817 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:38.817 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:38.817 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:38.817 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:38.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:33:38.818 00:33:38.818 --- 10.0.0.2 ping statistics --- 00:33:38.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.818 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:38.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:33:38.818 00:33:38.818 --- 10.0.0.1 ping statistics --- 00:33:38.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.818 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1746672 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1746672 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1746672 ']' 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:38.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.818 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:38.818 [2024-12-06 17:49:29.946953] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:38.818 [2024-12-06 17:49:29.948099] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:33:38.818 [2024-12-06 17:49:29.948151] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.818 [2024-12-06 17:49:30.045969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:38.818 [2024-12-06 17:49:30.101649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:38.818 [2024-12-06 17:49:30.101711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:38.818 [2024-12-06 17:49:30.101720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:38.818 [2024-12-06 17:49:30.101727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:38.818 [2024-12-06 17:49:30.101734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:38.818 [2024-12-06 17:49:30.103712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:38.818 [2024-12-06 17:49:30.103860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:38.818 [2024-12-06 17:49:30.104018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.818 [2024-12-06 17:49:30.104019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:38.818 [2024-12-06 17:49:30.186061] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:38.818 [2024-12-06 17:49:30.186840] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:38.818 [2024-12-06 17:49:30.187342] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:38.818 [2024-12-06 17:49:30.187773] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:38.818 [2024-12-06 17:49:30.187826] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
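
Note: nvmfappstart here launches the target inside the cvl_0_0_ns_spdk network namespace with --interrupt-mode and core mask 0x1E (binary 11110, i.e. cores 1-4, matching the four 'Reactor started on core ...' notices above; -e 0xFFFF is the tracepoint group mask also echoed above), then blocks until the RPC socket answers. A rough sketch of that start-and-wait step, with the waitforlisten polling approximated via rpc_get_methods (the real helper in autotest_common.sh does more bookkeeping):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -m 0x1E -> reactors on cores 1,2,3,4; the unix RPC socket is shared
    # across namespaces, so rpc.py can be driven from the default namespace.
    ip netns exec cvl_0_0_ns_spdk \
        $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Wait until the app listens on the default RPC socket before issuing RPCs.
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done
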
00:33:38.818 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:38.818 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:33:38.818 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:38.818 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:38.819 [2024-12-06 17:49:30.813135] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:38.819 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.082 Malloc0
00:33:39.082 [2024-12-06 17:49:30.921418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1746729
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1746729 /var/tmp/bdevperf.sock
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1746729 ']'
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:39.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:39.082 {
00:33:39.082 "params": {
00:33:39.082 "name": "Nvme$subsystem",
00:33:39.082 "trtype": "$TEST_TRANSPORT",
00:33:39.082 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:39.082 "adrfam": "ipv4",
00:33:39.082 "trsvcid": "$NVMF_PORT",
00:33:39.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:39.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:39.082 "hdgst": ${hdgst:-false},
00:33:39.082 "ddgst": ${ddgst:-false}
00:33:39.082 },
00:33:39.082 "method": "bdev_nvme_attach_controller"
00:33:39.082 }
00:33:39.082 EOF
00:33:39.082 )")
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:33:39.082 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:39.082 "params": {
00:33:39.082 "name": "Nvme0",
00:33:39.082 "trtype": "tcp",
00:33:39.082 "traddr": "10.0.0.2",
00:33:39.082 "adrfam": "ipv4",
00:33:39.082 "trsvcid": "4420",
00:33:39.082 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:39.082 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:39.082 "hdgst": false,
00:33:39.082 "ddgst": false
00:33:39.082 },
00:33:39.082 "method": "bdev_nvme_attach_controller"
00:33:39.082 }'
00:33:39.082 [2024-12-06 17:49:31.031946] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:33:39.082 [2024-12-06 17:49:31.032018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746729 ]
00:33:39.082 [2024-12-06 17:49:31.126687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.344 [2024-12-06 17:49:31.180187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:39.344 Running I/O for 10 seconds...
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']'
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:39.916 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.916 [2024-12-06 17:49:31.921021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e91e20 is same with the state(6) to be set
00:33:39.916 [2024-12-06 17:49:31.921227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.916 [2024-12-06 17:49:31.921290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further nvme_qpair.c print_command/print_completion *NOTICE* pairs elided: the remaining queued I/O (WRITE sqid:1 cid:48-63 lba:96256-98176 and READ sqid:1 cid:0-46 lba:90112-96000, len:128 each) all completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:33:39.918 [2024-12-06 17:49:31.923779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
task offset: 96128 on job bdev=Nvme0n1 fails
00:33:39.918
00:33:39.918 Latency(us)
00:33:39.918 [2024-12-06T16:49:31.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:39.918 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:39.918 Job: Nvme0n1 ended in about 0.53 seconds with error
00:33:39.918 Verification LBA range: start 0x0 length 0x400
00:33:39.918 Nvme0n1 : 0.53 1317.01 82.31 119.73 0.00 43443.18 1897.81 38010.88
00:33:39.918 [2024-12-06T16:49:31.984Z] ===================================================================================================================
00:33:39.918 [2024-12-06T16:49:31.984Z] Total : 1317.01 82.31 119.73 0.00 43443.18 1897.81 38010.88
00:33:39.918 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:39.918 [2024-12-06 17:49:31.926031] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:33:39.918 [2024-12-06 17:49:31.926074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1436c20 (9): Bad file descriptor
00:33:39.918 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:33:39.918 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:39.918 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:39.918 [2024-12-06 17:49:31.927707] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:33:39.918 [2024-12-06 17:49:31.927803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:33:39.918 [2024-12-06 17:49:31.927849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.918 [2024-12-06 17:49:31.927867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:33:39.918 [2024-12-06 17:49:31.927876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:33:39.918 [2024-12-06 17:49:31.927884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:39.918 [2024-12-06 17:49:31.927893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1436c20
00:33:39.918 [2024-12-06 17:49:31.927921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1436c20 (9): Bad file descriptor
00:33:39.918 [2024-12-06 17:49:31.927965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:33:39.918 [2024-12-06 17:49:31.927977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:33:39.918 [2024-12-06 17:49:31.927989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:33:39.918 [2024-12-06 17:49:31.927999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:33:39.918 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:39.918 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1746729
00:33:41.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1746729) - No such process
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:41.303 {
00:33:41.303 "params": {
00:33:41.303 "name": "Nvme$subsystem",
00:33:41.303 "trtype": "$TEST_TRANSPORT",
00:33:41.303 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:41.303 "adrfam": "ipv4",
00:33:41.303 "trsvcid": "$NVMF_PORT",
00:33:41.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:41.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:41.303 "hdgst": ${hdgst:-false},
00:33:41.303 "ddgst": ${ddgst:-false}
00:33:41.303 },
00:33:41.303 "method": "bdev_nvme_attach_controller"
00:33:41.303 }
00:33:41.303 EOF
00:33:41.303 )")
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:33:41.303 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:41.303 "params": {
00:33:41.304 "name": "Nvme0",
00:33:41.304 "trtype": "tcp",
00:33:41.304 "traddr": "10.0.0.2",
00:33:41.304 "adrfam": "ipv4",
00:33:41.304 "trsvcid": "4420",
00:33:41.304 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:41.304 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:41.304 "hdgst": false,
00:33:41.304 "ddgst": false
00:33:41.304 },
00:33:41.304 "method": "bdev_nvme_attach_controller"
00:33:41.304 }'
00:33:41.304 [2024-12-06 17:49:32.998335] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:33:41.304 [2024-12-06 17:49:32.998391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746766 ]
00:33:41.359 [2024-12-06 17:49:33.086651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:41.359 [2024-12-06 17:49:33.121485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:41.590 Running I/O for 1 seconds...
00:33:42.634 1533.00 IOPS, 95.81 MiB/s
00:33:42.634
00:33:42.634 Latency(us)
00:33:42.634 [2024-12-06T16:49:34.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:42.634 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:42.634 Verification LBA range: start 0x0 length 0x400
00:33:42.634 Nvme0n1 : 1.04 1540.73 96.30 0.00 0.00 40840.88 5488.64 37137.07
00:33:42.634 [2024-12-06T16:49:34.700Z] ===================================================================================================================
00:33:42.634 [2024-12-06T16:49:34.700Z] Total : 1540.73 96.30 0.00 0.00 40840.88 5488.64 37137.07
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1746672 ']'
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1746672
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1746672 ']'
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1746672
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:42.634 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1746672
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1746672'
00:33:42.896 killing process with pid 1746672
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1746672
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1746672
00:33:42.896 [2024-12-06 17:49:34.826881] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:42.896 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:45.440 17:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:45.440 17:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:33:45.440
00:33:45.440 real 0m14.597s
00:33:45.440 user 0m19.602s
00:33:45.440 sys 0m7.533s
00:33:45.440 17:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:45.440 17:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:45.440 ************************************
00:33:45.440 END TEST nvmf_host_management
00:33:45.440 ************************************
00:33:45.440 17:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:33:45.440 17:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:45.440 17:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:45.440 17:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:45.440 ************************************
00:33:45.440 START TEST nvmf_lvol
00:33:45.440 ************************************
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:33:45.440 * Looking for test storage...
00:33:45.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:45.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.440 --rc genhtml_branch_coverage=1 00:33:45.440 --rc genhtml_function_coverage=1 00:33:45.440 --rc genhtml_legend=1 00:33:45.440 --rc geninfo_all_blocks=1 00:33:45.440 --rc geninfo_unexecuted_blocks=1 00:33:45.440 00:33:45.440 ' 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:45.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.440 --rc genhtml_branch_coverage=1 00:33:45.440 --rc genhtml_function_coverage=1 00:33:45.440 --rc genhtml_legend=1 00:33:45.440 --rc geninfo_all_blocks=1 00:33:45.440 --rc geninfo_unexecuted_blocks=1 00:33:45.440 00:33:45.440 ' 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:45.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.440 --rc genhtml_branch_coverage=1 00:33:45.440 --rc genhtml_function_coverage=1 00:33:45.440 --rc genhtml_legend=1 00:33:45.440 --rc geninfo_all_blocks=1 00:33:45.440 --rc geninfo_unexecuted_blocks=1 00:33:45.440 00:33:45.440 ' 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:45.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.440 --rc genhtml_branch_coverage=1 00:33:45.440 --rc genhtml_function_coverage=1 
00:33:45.440 --rc genhtml_legend=1 00:33:45.440 --rc geninfo_all_blocks=1 00:33:45.440 --rc geninfo_unexecuted_blocks=1 00:33:45.440 00:33:45.440 ' 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.440 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.441 17:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:45.441 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:53.579 17:49:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:53.579 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:53.579 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:53.579 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:53.579 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:53.579 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.580 
17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:53.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:53.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:33:53.580 00:33:53.580 --- 10.0.0.2 ping statistics --- 00:33:53.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.580 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:53.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:33:53.580 00:33:53.580 --- 10.0.0.1 ping statistics --- 00:33:53.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.580 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1749242 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1749242 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1749242 ']' 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.580 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:53.580 [2024-12-06 17:49:44.572302] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
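The nvmftestinit trace above reduces to a short piece of namespace plumbing: one port of the E810 pair, cvl_0_0, is moved into a private network namespace to act as the target, while its sibling cvl_0_1 stays in the default namespace as the initiator, and the two then ping each other across the physical link. A minimal sketch of that wiring, using only commands and the 10.0.0.0/24 addressing visible in this run:

ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator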
00:33:53.580 [2024-12-06 17:49:44.573425] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:33:53.580 [2024-12-06 17:49:44.573473] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.580 [2024-12-06 17:49:44.675001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:53.580 [2024-12-06 17:49:44.726965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:53.580 [2024-12-06 17:49:44.727020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:53.580 [2024-12-06 17:49:44.727028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:53.580 [2024-12-06 17:49:44.727036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:53.580 [2024-12-06 17:49:44.727042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:53.580 [2024-12-06 17:49:44.729107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.580 [2024-12-06 17:49:44.729263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.580 [2024-12-06 17:49:44.729264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:53.580 [2024-12-06 17:49:44.810210] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:53.580 [2024-12-06 17:49:44.811266] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:53.580 [2024-12-06 17:49:44.811613] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:53.580 [2024-12-06 17:49:44.811790] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:33:53.580 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.580 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:33:53.580 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:53.580 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:53.580 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:53.580 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.580 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:53.580 [2024-12-06 17:49:45.582202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.580 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:53.840 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:53.840 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:54.102 17:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:54.102 17:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:54.363 17:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:54.623 17:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5b56b52a-8afb-4e8d-87bb-081bf2084ca6 00:33:54.623 17:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5b56b52a-8afb-4e8d-87bb-081bf2084ca6 lvol 20 00:33:54.623 17:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cde8e72c-f512-41b4-b6a9-ce06cba6853a 00:33:54.623 17:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:54.885 17:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cde8e72c-f512-41b4-b6a9-ce06cba6853a 00:33:55.146 17:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:55.146 [2024-12-06 17:49:47.158077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:33:55.146 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:55.407 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1749307 00:33:55.407 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:55.407 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:56.350 17:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cde8e72c-f512-41b4-b6a9-ce06cba6853a MY_SNAPSHOT 00:33:56.611 17:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=77f52690-a270-4c8f-bfe6-014346a37cb1 00:33:56.612 17:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cde8e72c-f512-41b4-b6a9-ce06cba6853a 30 00:33:56.873 17:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 77f52690-a270-4c8f-bfe6-014346a37cb1 MY_CLONE 00:33:57.134 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fd93d74c-7d84-4c33-8d56-6b8f51fb97d2 00:33:57.134 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fd93d74c-7d84-4c33-8d56-6b8f51fb97d2 00:33:57.707 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1749307 00:34:05.844 Initializing NVMe Controllers 00:34:05.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:05.844 Controller IO queue size 128, less than required. 00:34:05.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:05.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:05.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:05.844 Initialization complete. Launching workers. 
00:34:05.844 ======================================================== 00:34:05.844 Latency(us) 00:34:05.844 Device Information : IOPS MiB/s Average min max 00:34:05.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15283.70 59.70 8377.38 1894.49 63520.07 00:34:05.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15929.50 62.22 8037.24 3235.53 61543.17 00:34:05.844 ======================================================== 00:34:05.844 Total : 31213.20 121.93 8203.79 1894.49 63520.07 00:34:05.844 00:34:05.844 17:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:06.105 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cde8e72c-f512-41b4-b6a9-ce06cba6853a 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5b56b52a-8afb-4e8d-87bb-081bf2084ca6 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:06.365 rmmod nvme_tcp 00:34:06.365 rmmod nvme_fabrics 00:34:06.365 rmmod nvme_keyring 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1749242 ']' 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1749242 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1749242 ']' 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1749242 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.365 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749242 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749242' 00:34:06.624 killing process with pid 1749242 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1749242 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1749242 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:06.624 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.164 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:09.164 00:34:09.164 real 0m23.677s 00:34:09.164 user 0m56.138s 00:34:09.164 sys 0m10.716s 00:34:09.164 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:09.164 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:09.164 ************************************ 00:34:09.164 END TEST nvmf_lvol 00:34:09.164 ************************************ 00:34:09.164 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:09.164 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:09.164 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:09.164 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:09.164 ************************************ 00:34:09.164 START TEST nvmf_lvs_grow 00:34:09.164 
************************************ 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:09.165 * Looking for test storage... 00:34:09.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.165 --rc genhtml_branch_coverage=1 00:34:09.165 --rc genhtml_function_coverage=1 00:34:09.165 --rc genhtml_legend=1 00:34:09.165 --rc geninfo_all_blocks=1 00:34:09.165 --rc geninfo_unexecuted_blocks=1 00:34:09.165 00:34:09.165 ' 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.165 --rc genhtml_branch_coverage=1 00:34:09.165 --rc genhtml_function_coverage=1 00:34:09.165 --rc genhtml_legend=1 00:34:09.165 --rc geninfo_all_blocks=1 00:34:09.165 --rc geninfo_unexecuted_blocks=1 00:34:09.165 00:34:09.165 ' 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.165 --rc genhtml_branch_coverage=1 00:34:09.165 --rc genhtml_function_coverage=1 00:34:09.165 --rc genhtml_legend=1 00:34:09.165 --rc geninfo_all_blocks=1 00:34:09.165 --rc geninfo_unexecuted_blocks=1 00:34:09.165 00:34:09.165 ' 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.165 --rc genhtml_branch_coverage=1 00:34:09.165 --rc genhtml_function_coverage=1 00:34:09.165 --rc genhtml_legend=1 00:34:09.165 --rc geninfo_all_blocks=1 00:34:09.165 --rc geninfo_unexecuted_blocks=1 00:34:09.165 00:34:09.165 ' 00:34:09.165 17:50:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:09.165 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.165 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.165 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.165 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.165 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.165 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.165 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:09.165 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.165 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:09.165 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:09.165 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:34:09.166 17:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:17.310 17:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:17.310 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:17.310 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:17.310 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:17.310 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.310 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:17.311 17:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:17.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:17.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:34:17.311 00:34:17.311 --- 10.0.0.2 ping statistics --- 00:34:17.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.311 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:17.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:34:17.311 00:34:17.311 --- 10.0.0.1 ping statistics --- 00:34:17.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.311 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1751883 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1751883 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1751883 ']' 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.311 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:17.311 [2024-12-06 17:50:08.429971] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
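Aside: the nvmf_tcp_init sequence traced above splits the two detected E810 ports between the root namespace and a dedicated network namespace, so initiator and target traffic cross a real link. Condensed into plain commands (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the values discovered and assigned above; the iptables comment tag is omitted), the wiring is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                     # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator sanity check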
00:34:17.311 [2024-12-06 17:50:08.431124] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:34:17.311 [2024-12-06 17:50:08.431177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.311 [2024-12-06 17:50:08.530179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.311 [2024-12-06 17:50:08.580753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.311 [2024-12-06 17:50:08.580804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.311 [2024-12-06 17:50:08.580813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.311 [2024-12-06 17:50:08.580820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.311 [2024-12-06 17:50:08.580826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.311 [2024-12-06 17:50:08.581556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.311 [2024-12-06 17:50:08.658874] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:17.311 [2024-12-06 17:50:08.659150] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:17.311 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.311 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:34:17.311 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:17.311 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.311 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:17.311 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.311 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:17.573 [2024-12-06 17:50:09.450415] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.573 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:17.573 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:17.573 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.573 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:17.573 ************************************ 00:34:17.573 START TEST lvs_grow_clean 00:34:17.574 ************************************ 00:34:17.574 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:34:17.574 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:17.574 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:17.574 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:17.574 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:17.574 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:17.574 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:17.574 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:17.574 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:17.574 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:17.835 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:17.835 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:18.096 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:18.096 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:18.096 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:18.096 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:18.096 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:18.096 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08cdc363-c6fc-45cc-8ee4-da885595336b lvol 150 00:34:18.357 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8d38172d-1334-4eee-91d5-9c0a917fb694 00:34:18.357 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:18.357 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:18.619 [2024-12-06 17:50:10.478139] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:18.619 [2024-12-06 17:50:10.478320] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:18.619 true 00:34:18.619 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:18.619 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:18.881 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:18.881 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:18.881 17:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8d38172d-1334-4eee-91d5-9c0a917fb694 00:34:19.143 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:19.404 [2024-12-06 17:50:11.222775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1751963 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1751963 /var/tmp/bdevperf.sock 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1751963 ']' 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:19.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:19.404 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:19.664 [2024-12-06 17:50:11.481407] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:34:19.664 [2024-12-06 17:50:11.481478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751963 ] 00:34:19.664 [2024-12-06 17:50:11.574154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.664 [2024-12-06 17:50:11.626088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.235 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:20.235 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:34:20.235 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:20.495 Nvme0n1 00:34:20.495 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:20.756 [ 00:34:20.756 { 00:34:20.756 "name": "Nvme0n1", 00:34:20.756 "aliases": [ 00:34:20.756 "8d38172d-1334-4eee-91d5-9c0a917fb694" 00:34:20.756 ], 00:34:20.756 "product_name": "NVMe disk", 00:34:20.756 "block_size": 4096, 00:34:20.756 "num_blocks": 38912, 00:34:20.756 "uuid": "8d38172d-1334-4eee-91d5-9c0a917fb694", 00:34:20.756 "numa_id": 0, 00:34:20.756 "assigned_rate_limits": { 00:34:20.756 "rw_ios_per_sec": 0, 00:34:20.756 "rw_mbytes_per_sec": 0, 00:34:20.756 "r_mbytes_per_sec": 0, 00:34:20.756 "w_mbytes_per_sec": 0 00:34:20.756 }, 00:34:20.756 "claimed": false, 00:34:20.756 "zoned": false, 00:34:20.756 "supported_io_types": { 00:34:20.756 "read": true, 00:34:20.756 "write": true, 00:34:20.756 "unmap": true, 00:34:20.756 "flush": true, 00:34:20.756 "reset": true, 00:34:20.756 "nvme_admin": true, 00:34:20.756 "nvme_io": true, 00:34:20.756 "nvme_io_md": false, 00:34:20.756 "write_zeroes": true, 00:34:20.756 "zcopy": false, 00:34:20.756 "get_zone_info": false, 00:34:20.756 "zone_management": false, 00:34:20.756 "zone_append": false, 00:34:20.756 "compare": true, 00:34:20.756 "compare_and_write": true, 00:34:20.756 "abort": true, 00:34:20.756 "seek_hole": false, 00:34:20.756 "seek_data": false, 00:34:20.756 "copy": true, 
00:34:20.756 "nvme_iov_md": false 00:34:20.756 }, 00:34:20.756 "memory_domains": [ 00:34:20.756 { 00:34:20.756 "dma_device_id": "system", 00:34:20.756 "dma_device_type": 1 00:34:20.756 } 00:34:20.756 ], 00:34:20.756 "driver_specific": { 00:34:20.756 "nvme": [ 00:34:20.756 { 00:34:20.756 "trid": { 00:34:20.756 "trtype": "TCP", 00:34:20.756 "adrfam": "IPv4", 00:34:20.756 "traddr": "10.0.0.2", 00:34:20.756 "trsvcid": "4420", 00:34:20.756 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:20.756 }, 00:34:20.756 "ctrlr_data": { 00:34:20.756 "cntlid": 1, 00:34:20.756 "vendor_id": "0x8086", 00:34:20.756 "model_number": "SPDK bdev Controller", 00:34:20.756 "serial_number": "SPDK0", 00:34:20.756 "firmware_revision": "25.01", 00:34:20.756 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:20.756 "oacs": { 00:34:20.756 "security": 0, 00:34:20.756 "format": 0, 00:34:20.756 "firmware": 0, 00:34:20.756 "ns_manage": 0 00:34:20.756 }, 00:34:20.756 "multi_ctrlr": true, 00:34:20.756 "ana_reporting": false 00:34:20.756 }, 00:34:20.756 "vs": { 00:34:20.756 "nvme_version": "1.3" 00:34:20.756 }, 00:34:20.756 "ns_data": { 00:34:20.756 "id": 1, 00:34:20.756 "can_share": true 00:34:20.756 } 00:34:20.756 } 00:34:20.756 ], 00:34:20.756 "mp_policy": "active_passive" 00:34:20.757 } 00:34:20.757 } 00:34:20.757 ] 00:34:20.757 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1751980 00:34:20.757 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:20.757 17:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:20.757 Running I/O for 10 seconds... 
00:34:22.143 Latency(us) 00:34:22.143 [2024-12-06T16:50:14.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:22.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:22.143 Nvme0n1 : 1.00 16901.00 66.02 0.00 0.00 0.00 0.00 0.00 00:34:22.143 [2024-12-06T16:50:14.209Z] =================================================================================================================== 00:34:22.143 [2024-12-06T16:50:14.210Z] Total : 16901.00 66.02 0.00 0.00 0.00 0.00 0.00 00:34:22.144 00:34:22.716 17:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:22.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:22.978 Nvme0n1 : 2.00 17150.00 66.99 0.00 0.00 0.00 0.00 0.00 00:34:22.978 [2024-12-06T16:50:15.044Z] =================================================================================================================== 00:34:22.978 [2024-12-06T16:50:15.044Z] Total : 17150.00 66.99 0.00 0.00 0.00 0.00 0.00 00:34:22.978 00:34:22.978 true 00:34:22.978 17:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:22.978 17:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:23.240 17:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:23.240 17:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:23.240 17:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1751980 00:34:23.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:23.812 Nvme0n1 : 3.00 17381.33 67.90 0.00 0.00 0.00 0.00 0.00 00:34:23.812 [2024-12-06T16:50:15.878Z] =================================================================================================================== 00:34:23.812 [2024-12-06T16:50:15.878Z] Total : 17381.33 67.90 0.00 0.00 0.00 0.00 0.00 00:34:23.812 00:34:24.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:24.754 Nvme0n1 : 4.00 17592.00 68.72 0.00 0.00 0.00 0.00 0.00 00:34:24.754 [2024-12-06T16:50:16.820Z] =================================================================================================================== 00:34:24.754 [2024-12-06T16:50:16.820Z] Total : 17592.00 68.72 0.00 0.00 0.00 0.00 0.00 00:34:24.754 00:34:26.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:26.134 Nvme0n1 : 5.00 18353.60 71.69 0.00 0.00 0.00 0.00 0.00 00:34:26.134 [2024-12-06T16:50:18.201Z] =================================================================================================================== 00:34:26.135 [2024-12-06T16:50:18.201Z] Total : 18353.60 71.69 0.00 0.00 0.00 0.00 0.00 00:34:26.135 00:34:27.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:27.074 Nvme0n1 : 6.00 19567.83 76.44 0.00 0.00 0.00 0.00 0.00 00:34:27.074 [2024-12-06T16:50:19.140Z] 
=================================================================================================================== 00:34:27.074 [2024-12-06T16:50:19.140Z] Total : 19567.83 76.44 0.00 0.00 0.00 0.00 0.00 00:34:27.074 00:34:28.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:28.010 Nvme0n1 : 7.00 20424.00 79.78 0.00 0.00 0.00 0.00 0.00 00:34:28.010 [2024-12-06T16:50:20.076Z] =================================================================================================================== 00:34:28.010 [2024-12-06T16:50:20.076Z] Total : 20424.00 79.78 0.00 0.00 0.00 0.00 0.00 00:34:28.010 00:34:28.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:28.950 Nvme0n1 : 8.00 21069.88 82.30 0.00 0.00 0.00 0.00 0.00 00:34:28.950 [2024-12-06T16:50:21.016Z] =================================================================================================================== 00:34:28.950 [2024-12-06T16:50:21.016Z] Total : 21069.88 82.30 0.00 0.00 0.00 0.00 0.00 00:34:28.950 00:34:29.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:29.892 Nvme0n1 : 9.00 21579.33 84.29 0.00 0.00 0.00 0.00 0.00 00:34:29.892 [2024-12-06T16:50:21.958Z] =================================================================================================================== 00:34:29.892 [2024-12-06T16:50:21.958Z] Total : 21579.33 84.29 0.00 0.00 0.00 0.00 0.00 00:34:29.892 00:34:30.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:30.833 Nvme0n1 : 10.00 21985.30 85.88 0.00 0.00 0.00 0.00 0.00 00:34:30.833 [2024-12-06T16:50:22.899Z] =================================================================================================================== 00:34:30.833 [2024-12-06T16:50:22.899Z] Total : 21985.30 85.88 0.00 0.00 0.00 0.00 0.00 00:34:30.833 00:34:30.833 00:34:30.833 Latency(us) 00:34:30.833 [2024-12-06T16:50:22.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:30.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:30.833 Nvme0n1 : 10.00 21990.08 85.90 0.00 0.00 5817.87 2880.85 28835.84 00:34:30.833 [2024-12-06T16:50:22.899Z] =================================================================================================================== 00:34:30.833 [2024-12-06T16:50:22.899Z] Total : 21990.08 85.90 0.00 0.00 5817.87 2880.85 28835.84 00:34:30.833 { 00:34:30.833 "results": [ 00:34:30.833 { 00:34:30.833 "job": "Nvme0n1", 00:34:30.833 "core_mask": "0x2", 00:34:30.833 "workload": "randwrite", 00:34:30.833 "status": "finished", 00:34:30.833 "queue_depth": 128, 00:34:30.833 "io_size": 4096, 00:34:30.833 "runtime": 10.003645, 00:34:30.833 "iops": 21990.08461415814, 00:34:30.833 "mibps": 85.89876802405523, 00:34:30.833 "io_failed": 0, 00:34:30.833 "io_timeout": 0, 00:34:30.833 "avg_latency_us": 5817.868763090146, 00:34:30.833 "min_latency_us": 2880.8533333333335, 00:34:30.833 "max_latency_us": 28835.84 00:34:30.833 } 00:34:30.833 ], 00:34:30.833 "core_count": 1 00:34:30.833 } 00:34:30.833 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1751963 00:34:30.833 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1751963 ']' 00:34:30.833 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1751963 00:34:30.833 
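Aside: stepping back from the teardown trace, the grow step exercised mid-run above reduces to enlarging the backing file, rescanning the AIO bdev, and growing the lvstore, after which total_data_clusters is expected to double from 49 to 99. A minimal sketch (path relative to the spdk checkout; $lvs stands for the lvstore UUID created earlier):

truncate -s 400M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_rescan aio_bdev
scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( clusters == 99 ))    # 200M -> 400M at 4M clusters, minus lvstore metadata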
17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:34:30.833 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:30.833 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1751963 00:34:31.094 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:31.094 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:31.094 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1751963' 00:34:31.094 killing process with pid 1751963 00:34:31.094 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1751963 00:34:31.094 Received shutdown signal, test time was about 10.000000 seconds 00:34:31.094 00:34:31.094 Latency(us) 00:34:31.094 [2024-12-06T16:50:23.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.094 [2024-12-06T16:50:23.160Z] =================================================================================================================== 00:34:31.094 [2024-12-06T16:50:23.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:31.094 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1751963 00:34:31.094 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:31.355 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:31.355 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:31.355 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:31.615 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:31.615 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:31.615 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:31.876 [2024-12-06 17:50:23.710190] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:31.876 
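Aside: the NOT wrapper whose internals are traced next simply asserts that a command fails. Once aio_bdev is deleted, bdev_lvol_get_lvstores on the old UUID must return -19 (No such device), as the JSON-RPC error below confirms. Stripped of the xtrace plumbing, the check is roughly equivalent to:

if scripts/rpc.py bdev_lvol_get_lvstores -u 08cdc363-c6fc-45cc-8ee4-da885595336b; then
    echo "lvstore unexpectedly survived bdev_aio_delete" >&2
    exit 1
fi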
17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:31.876 request: 00:34:31.876 { 00:34:31.876 "uuid": "08cdc363-c6fc-45cc-8ee4-da885595336b", 00:34:31.876 "method": "bdev_lvol_get_lvstores", 00:34:31.876 "req_id": 1 00:34:31.876 } 00:34:31.876 Got JSON-RPC error response 00:34:31.876 response: 00:34:31.876 { 00:34:31.876 "code": -19, 00:34:31.876 "message": "No such device" 00:34:31.876 } 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:31.876 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:32.137 aio_bdev 00:34:32.137 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
8d38172d-1334-4eee-91d5-9c0a917fb694 00:34:32.137 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8d38172d-1334-4eee-91d5-9c0a917fb694 00:34:32.137 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:32.137 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:34:32.137 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:32.137 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:32.138 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:32.398 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8d38172d-1334-4eee-91d5-9c0a917fb694 -t 2000 00:34:32.398 [ 00:34:32.398 { 00:34:32.398 "name": "8d38172d-1334-4eee-91d5-9c0a917fb694", 00:34:32.398 "aliases": [ 00:34:32.398 "lvs/lvol" 00:34:32.398 ], 00:34:32.398 "product_name": "Logical Volume", 00:34:32.398 "block_size": 4096, 00:34:32.398 "num_blocks": 38912, 00:34:32.398 "uuid": "8d38172d-1334-4eee-91d5-9c0a917fb694", 00:34:32.398 "assigned_rate_limits": { 00:34:32.398 "rw_ios_per_sec": 0, 00:34:32.398 "rw_mbytes_per_sec": 0, 00:34:32.398 "r_mbytes_per_sec": 0, 00:34:32.398 "w_mbytes_per_sec": 0 00:34:32.398 }, 00:34:32.398 "claimed": false, 00:34:32.398 "zoned": false, 00:34:32.398 "supported_io_types": { 00:34:32.398 "read": true, 00:34:32.398 "write": true, 00:34:32.398 "unmap": true, 00:34:32.398 "flush": false, 00:34:32.398 "reset": true, 00:34:32.398 "nvme_admin": false, 00:34:32.398 "nvme_io": false, 00:34:32.398 "nvme_io_md": false, 00:34:32.398 "write_zeroes": true, 00:34:32.398 "zcopy": false, 00:34:32.398 "get_zone_info": false, 00:34:32.398 "zone_management": false, 00:34:32.398 "zone_append": false, 00:34:32.398 "compare": false, 00:34:32.398 "compare_and_write": false, 00:34:32.398 "abort": false, 00:34:32.398 "seek_hole": true, 00:34:32.398 "seek_data": true, 00:34:32.398 "copy": false, 00:34:32.398 "nvme_iov_md": false 00:34:32.398 }, 00:34:32.398 "driver_specific": { 00:34:32.398 "lvol": { 00:34:32.398 "lvol_store_uuid": "08cdc363-c6fc-45cc-8ee4-da885595336b", 00:34:32.398 "base_bdev": "aio_bdev", 00:34:32.398 "thin_provision": false, 00:34:32.398 "num_allocated_clusters": 38, 00:34:32.398 "snapshot": false, 00:34:32.398 "clone": false, 00:34:32.398 "esnap_clone": false 00:34:32.398 } 00:34:32.398 } 00:34:32.398 } 00:34:32.398 ] 00:34:32.659 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:34:32.659 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:32.659 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:32.659 17:50:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:32.659 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:32.659 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:32.919 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:32.919 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d38172d-1334-4eee-91d5-9c0a917fb694 00:34:33.180 17:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08cdc363-c6fc-45cc-8ee4-da885595336b 00:34:33.180 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:33.441 00:34:33.441 real 0m15.866s 00:34:33.441 user 0m15.467s 00:34:33.441 sys 0m1.523s 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:33.441 ************************************ 00:34:33.441 END TEST lvs_grow_clean 00:34:33.441 ************************************ 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:33.441 ************************************ 00:34:33.441 START TEST lvs_grow_dirty 00:34:33.441 ************************************ 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:33.441 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:33.701 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:33.701 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:33.961 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dbda9133-234e-4ba6-944b-f02c495f044f 00:34:33.961 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:33.961 17:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:34.221 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:34.221 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:34.221 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dbda9133-234e-4ba6-944b-f02c495f044f lvol 150 00:34:34.221 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0a55aeff-2dd0-4619-83c0-faf3b9643ca6 00:34:34.221 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:34.221 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:34.481 [2024-12-06 17:50:26.378082] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:34.481 [2024-12-06 17:50:26.378222] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:34.481 true 00:34:34.481 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:34.481 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:34.741 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:34.741 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:34.741 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a55aeff-2dd0-4619-83c0-faf3b9643ca6 00:34:35.002 17:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.002 [2024-12-06 17:50:27.066603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1752214 00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1752214 /var/tmp/bdevperf.sock 00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1752214 ']' 00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:35.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
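Aside: setup for the dirty variant mirrors the clean case traced earlier: a 200 MiB file-backed AIO bdev hosting a 4 MiB-cluster lvstore with one 150 MiB lvol, exported over NVMe/TCP, with the file grown to 400 MiB and rescanned before bdevperf attaches. Condensed (paths relative to the spdk checkout; $lvs and $lvol capture the UUIDs printed by the create calls):

truncate -s 200M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_rescan aio_bdev
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420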
00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.262 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:35.262 [2024-12-06 17:50:27.283096] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:34:35.262 [2024-12-06 17:50:27.283160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752214 ] 00:34:35.523 [2024-12-06 17:50:27.370345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.523 [2024-12-06 17:50:27.399916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.523 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.523 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:35.523 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:35.783 Nvme0n1 00:34:35.783 17:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:36.045 [ 00:34:36.045 { 00:34:36.045 "name": "Nvme0n1", 00:34:36.045 "aliases": [ 00:34:36.045 "0a55aeff-2dd0-4619-83c0-faf3b9643ca6" 00:34:36.045 ], 00:34:36.045 "product_name": "NVMe disk", 00:34:36.045 "block_size": 4096, 00:34:36.045 "num_blocks": 38912, 00:34:36.045 "uuid": "0a55aeff-2dd0-4619-83c0-faf3b9643ca6", 00:34:36.045 "numa_id": 0, 00:34:36.045 "assigned_rate_limits": { 00:34:36.045 "rw_ios_per_sec": 0, 00:34:36.045 "rw_mbytes_per_sec": 0, 00:34:36.045 "r_mbytes_per_sec": 0, 00:34:36.045 "w_mbytes_per_sec": 0 00:34:36.045 }, 00:34:36.045 "claimed": false, 00:34:36.045 "zoned": false, 00:34:36.045 "supported_io_types": { 00:34:36.045 "read": true, 00:34:36.045 "write": true, 00:34:36.045 "unmap": true, 00:34:36.045 "flush": true, 00:34:36.045 "reset": true, 00:34:36.045 "nvme_admin": true, 00:34:36.045 "nvme_io": true, 00:34:36.045 "nvme_io_md": false, 00:34:36.045 "write_zeroes": true, 00:34:36.045 "zcopy": false, 00:34:36.045 "get_zone_info": false, 00:34:36.045 "zone_management": false, 00:34:36.045 "zone_append": false, 00:34:36.045 "compare": true, 00:34:36.045 "compare_and_write": true, 00:34:36.045 "abort": true, 00:34:36.045 "seek_hole": false, 00:34:36.045 "seek_data": false, 00:34:36.045 "copy": true, 00:34:36.045 "nvme_iov_md": false 00:34:36.045 }, 00:34:36.045 "memory_domains": [ 00:34:36.045 { 00:34:36.045 "dma_device_id": "system", 00:34:36.045 "dma_device_type": 1 00:34:36.045 } 00:34:36.045 ], 00:34:36.045 "driver_specific": { 00:34:36.045 "nvme": [ 00:34:36.045 { 00:34:36.045 "trid": { 00:34:36.045 "trtype": "TCP", 00:34:36.045 "adrfam": "IPv4", 00:34:36.045 "traddr": "10.0.0.2", 00:34:36.045 "trsvcid": "4420", 00:34:36.045 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:36.045 }, 00:34:36.045 "ctrlr_data": 
{ 00:34:36.045 "cntlid": 1, 00:34:36.045 "vendor_id": "0x8086", 00:34:36.045 "model_number": "SPDK bdev Controller", 00:34:36.045 "serial_number": "SPDK0", 00:34:36.045 "firmware_revision": "25.01", 00:34:36.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.045 "oacs": { 00:34:36.045 "security": 0, 00:34:36.045 "format": 0, 00:34:36.045 "firmware": 0, 00:34:36.045 "ns_manage": 0 00:34:36.045 }, 00:34:36.045 "multi_ctrlr": true, 00:34:36.045 "ana_reporting": false 00:34:36.045 }, 00:34:36.045 "vs": { 00:34:36.045 "nvme_version": "1.3" 00:34:36.045 }, 00:34:36.045 "ns_data": { 00:34:36.045 "id": 1, 00:34:36.045 "can_share": true 00:34:36.045 } 00:34:36.045 } 00:34:36.045 ], 00:34:36.045 "mp_policy": "active_passive" 00:34:36.045 } 00:34:36.045 } 00:34:36.045 ] 00:34:36.045 17:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1752229 00:34:36.045 17:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:36.045 17:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:36.045 Running I/O for 10 seconds... 00:34:37.432 Latency(us) 00:34:37.432 [2024-12-06T16:50:29.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:37.432 Nvme0n1 : 1.00 17663.00 69.00 0.00 0.00 0.00 0.00 0.00 00:34:37.432 [2024-12-06T16:50:29.498Z] =================================================================================================================== 00:34:37.432 [2024-12-06T16:50:29.498Z] Total : 17663.00 69.00 0.00 0.00 0.00 0.00 0.00 00:34:37.432 00:34:38.004 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:38.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:38.266 Nvme0n1 : 2.00 17848.50 69.72 0.00 0.00 0.00 0.00 0.00 00:34:38.266 [2024-12-06T16:50:30.332Z] =================================================================================================================== 00:34:38.266 [2024-12-06T16:50:30.332Z] Total : 17848.50 69.72 0.00 0.00 0.00 0.00 0.00 00:34:38.266 00:34:38.266 true 00:34:38.266 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:38.266 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:38.528 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:38.528 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:38.528 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1752229 00:34:39.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:39.101 Nvme0n1 : 
3.00 17952.67 70.13 0.00 0.00 0.00 0.00 0.00 00:34:39.101 [2024-12-06T16:50:31.167Z] =================================================================================================================== 00:34:39.101 [2024-12-06T16:50:31.167Z] Total : 17952.67 70.13 0.00 0.00 0.00 0.00 0.00 00:34:39.101 00:34:40.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:40.188 Nvme0n1 : 4.00 18004.75 70.33 0.00 0.00 0.00 0.00 0.00 00:34:40.188 [2024-12-06T16:50:32.254Z] =================================================================================================================== 00:34:40.188 [2024-12-06T16:50:32.254Z] Total : 18004.75 70.33 0.00 0.00 0.00 0.00 0.00 00:34:40.188 00:34:41.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:41.131 Nvme0n1 : 5.00 19331.40 75.51 0.00 0.00 0.00 0.00 0.00 00:34:41.131 [2024-12-06T16:50:33.197Z] =================================================================================================================== 00:34:41.131 [2024-12-06T16:50:33.197Z] Total : 19331.40 75.51 0.00 0.00 0.00 0.00 0.00 00:34:41.131 00:34:42.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:42.074 Nvme0n1 : 6.00 20374.83 79.59 0.00 0.00 0.00 0.00 0.00 00:34:42.075 [2024-12-06T16:50:34.141Z] =================================================================================================================== 00:34:42.075 [2024-12-06T16:50:34.141Z] Total : 20374.83 79.59 0.00 0.00 0.00 0.00 0.00 00:34:42.075 00:34:43.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:43.458 Nvme0n1 : 7.00 21129.00 82.54 0.00 0.00 0.00 0.00 0.00 00:34:43.458 [2024-12-06T16:50:35.524Z] =================================================================================================================== 00:34:43.458 [2024-12-06T16:50:35.524Z] Total : 21129.00 82.54 0.00 0.00 0.00 0.00 0.00 00:34:43.458 00:34:44.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:44.398 Nvme0n1 : 8.00 21694.62 84.74 0.00 0.00 0.00 0.00 0.00 00:34:44.398 [2024-12-06T16:50:36.464Z] =================================================================================================================== 00:34:44.398 [2024-12-06T16:50:36.464Z] Total : 21694.62 84.74 0.00 0.00 0.00 0.00 0.00 00:34:44.398 00:34:45.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:45.338 Nvme0n1 : 9.00 22134.67 86.46 0.00 0.00 0.00 0.00 0.00 00:34:45.338 [2024-12-06T16:50:37.404Z] =================================================================================================================== 00:34:45.338 [2024-12-06T16:50:37.404Z] Total : 22134.67 86.46 0.00 0.00 0.00 0.00 0.00 00:34:45.338 00:34:46.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:46.274 Nvme0n1 : 10.00 22486.60 87.84 0.00 0.00 0.00 0.00 0.00 00:34:46.274 [2024-12-06T16:50:38.340Z] =================================================================================================================== 00:34:46.274 [2024-12-06T16:50:38.340Z] Total : 22486.60 87.84 0.00 0.00 0.00 0.00 0.00 00:34:46.274 00:34:46.274 00:34:46.274 Latency(us) 00:34:46.274 [2024-12-06T16:50:38.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:46.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:46.274 Nvme0n1 : 10.00 22492.02 87.86 0.00 0.00 5688.53 3112.96 28180.48 00:34:46.274 
[2024-12-06T16:50:38.340Z] =================================================================================================================== 00:34:46.274 [2024-12-06T16:50:38.340Z] Total : 22492.02 87.86 0.00 0.00 5688.53 3112.96 28180.48 00:34:46.274 { 00:34:46.274 "results": [ 00:34:46.274 { 00:34:46.274 "job": "Nvme0n1", 00:34:46.274 "core_mask": "0x2", 00:34:46.274 "workload": "randwrite", 00:34:46.274 "status": "finished", 00:34:46.274 "queue_depth": 128, 00:34:46.274 "io_size": 4096, 00:34:46.274 "runtime": 10.00328, 00:34:46.274 "iops": 22492.022616581762, 00:34:46.274 "mibps": 87.85946334602251, 00:34:46.274 "io_failed": 0, 00:34:46.274 "io_timeout": 0, 00:34:46.274 "avg_latency_us": 5688.527096248493, 00:34:46.274 "min_latency_us": 3112.96, 00:34:46.274 "max_latency_us": 28180.48 00:34:46.274 } 00:34:46.274 ], 00:34:46.274 "core_count": 1 00:34:46.274 } 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1752214 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1752214 ']' 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1752214 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1752214 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1752214' 00:34:46.274 killing process with pid 1752214 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1752214 00:34:46.274 Received shutdown signal, test time was about 10.000000 seconds 00:34:46.274 00:34:46.274 Latency(us) 00:34:46.274 [2024-12-06T16:50:38.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:46.274 [2024-12-06T16:50:38.340Z] =================================================================================================================== 00:34:46.274 [2024-12-06T16:50:38.340Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:46.274 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1752214 00:34:46.275 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:46.534 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
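A quick consistency check on the bdevperf summary above, using only numbers from the results block: with 4 KiB I/Os, MiB/s = IOPS * io_size / 2^20, so 22492.02 * 4096 / 1048576 ≈ 87.86, matching the reported "mibps"; and by Little's law the sustained in-flight I/O count is IOPS * average latency, 22492.02 * 5688.53 us ≈ 128, matching the configured queue depth of 128. An illustrative shell one-liner (not part of the test) that reproduces both figures:

awk 'BEGIN { iops = 22492.02; io = 4096; lat = 5688.53e-6;
             printf "MiB/s=%.2f inflight=%.1f\n", iops*io/2^20, iops*lat }'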
00:34:46.794 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:46.794 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:46.794 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:46.794 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:46.794 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1751883 00:34:46.794 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1751883 00:34:47.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1751883 Killed "${NVMF_APP[@]}" "$@" 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1752369 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1752369 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1752369 ']' 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.053 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:47.053 [2024-12-06 17:50:38.925322] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:47.053 [2024-12-06 17:50:38.926289] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:34:47.053 [2024-12-06 17:50:38.926329] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.053 [2024-12-06 17:50:39.018440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.053 [2024-12-06 17:50:39.048344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.053 [2024-12-06 17:50:39.048373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.053 [2024-12-06 17:50:39.048378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.053 [2024-12-06 17:50:39.048384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:47.053 [2024-12-06 17:50:39.048388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:47.053 [2024-12-06 17:50:39.048824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.053 [2024-12-06 17:50:39.099815] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:47.053 [2024-12-06 17:50:39.100006] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:47.991 [2024-12-06 17:50:39.927165] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:47.991 [2024-12-06 17:50:39.927408] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:47.991 [2024-12-06 17:50:39.927500] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0a55aeff-2dd0-4619-83c0-faf3b9643ca6 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0a55aeff-2dd0-4619-83c0-faf3b9643ca6 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:47.991 17:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:48.252 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0a55aeff-2dd0-4619-83c0-faf3b9643ca6 -t 2000 00:34:48.252 [ 00:34:48.252 { 00:34:48.252 "name": "0a55aeff-2dd0-4619-83c0-faf3b9643ca6", 00:34:48.252 "aliases": [ 00:34:48.252 "lvs/lvol" 00:34:48.252 ], 00:34:48.252 "product_name": "Logical Volume", 00:34:48.252 "block_size": 4096, 00:34:48.252 "num_blocks": 38912, 00:34:48.252 "uuid": "0a55aeff-2dd0-4619-83c0-faf3b9643ca6", 00:34:48.252 "assigned_rate_limits": { 00:34:48.252 "rw_ios_per_sec": 0, 00:34:48.252 "rw_mbytes_per_sec": 0, 00:34:48.252 
"r_mbytes_per_sec": 0, 00:34:48.252 "w_mbytes_per_sec": 0 00:34:48.252 }, 00:34:48.252 "claimed": false, 00:34:48.252 "zoned": false, 00:34:48.252 "supported_io_types": { 00:34:48.252 "read": true, 00:34:48.252 "write": true, 00:34:48.252 "unmap": true, 00:34:48.252 "flush": false, 00:34:48.252 "reset": true, 00:34:48.252 "nvme_admin": false, 00:34:48.252 "nvme_io": false, 00:34:48.252 "nvme_io_md": false, 00:34:48.252 "write_zeroes": true, 00:34:48.252 "zcopy": false, 00:34:48.252 "get_zone_info": false, 00:34:48.252 "zone_management": false, 00:34:48.252 "zone_append": false, 00:34:48.252 "compare": false, 00:34:48.252 "compare_and_write": false, 00:34:48.252 "abort": false, 00:34:48.252 "seek_hole": true, 00:34:48.252 "seek_data": true, 00:34:48.252 "copy": false, 00:34:48.252 "nvme_iov_md": false 00:34:48.252 }, 00:34:48.252 "driver_specific": { 00:34:48.252 "lvol": { 00:34:48.252 "lvol_store_uuid": "dbda9133-234e-4ba6-944b-f02c495f044f", 00:34:48.252 "base_bdev": "aio_bdev", 00:34:48.252 "thin_provision": false, 00:34:48.252 "num_allocated_clusters": 38, 00:34:48.252 "snapshot": false, 00:34:48.252 "clone": false, 00:34:48.252 "esnap_clone": false 00:34:48.252 } 00:34:48.252 } 00:34:48.252 } 00:34:48.252 ] 00:34:48.252 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:48.252 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:48.252 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:48.512 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:48.512 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:48.512 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:48.772 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:48.772 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:48.773 [2024-12-06 17:50:40.805311] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:49.035 17:50:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:49.035 17:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:49.035 request: 00:34:49.035 { 00:34:49.035 "uuid": "dbda9133-234e-4ba6-944b-f02c495f044f", 00:34:49.035 "method": "bdev_lvol_get_lvstores", 00:34:49.035 "req_id": 1 00:34:49.035 } 00:34:49.035 Got JSON-RPC error response 00:34:49.035 response: 00:34:49.035 { 00:34:49.035 "code": -19, 00:34:49.035 "message": "No such device" 00:34:49.035 } 00:34:49.035 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:34:49.035 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:49.035 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:49.035 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.035 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:49.296 aio_bdev 00:34:49.296 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0a55aeff-2dd0-4619-83c0-faf3b9643ca6 00:34:49.296 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0a55aeff-2dd0-4619-83c0-faf3b9643ca6 00:34:49.296 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:49.296 17:50:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:49.296 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:49.296 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:49.296 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:49.296 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0a55aeff-2dd0-4619-83c0-faf3b9643ca6 -t 2000 00:34:49.557 [ 00:34:49.557 { 00:34:49.557 "name": "0a55aeff-2dd0-4619-83c0-faf3b9643ca6", 00:34:49.557 "aliases": [ 00:34:49.557 "lvs/lvol" 00:34:49.557 ], 00:34:49.557 "product_name": "Logical Volume", 00:34:49.557 "block_size": 4096, 00:34:49.557 "num_blocks": 38912, 00:34:49.557 "uuid": "0a55aeff-2dd0-4619-83c0-faf3b9643ca6", 00:34:49.557 "assigned_rate_limits": { 00:34:49.557 "rw_ios_per_sec": 0, 00:34:49.557 "rw_mbytes_per_sec": 0, 00:34:49.557 "r_mbytes_per_sec": 0, 00:34:49.557 "w_mbytes_per_sec": 0 00:34:49.557 }, 00:34:49.557 "claimed": false, 00:34:49.557 "zoned": false, 00:34:49.557 "supported_io_types": { 00:34:49.557 "read": true, 00:34:49.557 "write": true, 00:34:49.557 "unmap": true, 00:34:49.557 "flush": false, 00:34:49.557 "reset": true, 00:34:49.557 "nvme_admin": false, 00:34:49.557 "nvme_io": false, 00:34:49.557 "nvme_io_md": false, 00:34:49.557 "write_zeroes": true, 00:34:49.557 "zcopy": false, 00:34:49.557 "get_zone_info": false, 00:34:49.557 "zone_management": false, 00:34:49.557 "zone_append": false, 00:34:49.557 "compare": false, 00:34:49.557 "compare_and_write": false, 00:34:49.557 "abort": false, 00:34:49.557 "seek_hole": true, 00:34:49.557 "seek_data": true, 00:34:49.557 "copy": false, 00:34:49.557 "nvme_iov_md": false 00:34:49.557 }, 00:34:49.557 "driver_specific": { 00:34:49.557 "lvol": { 00:34:49.557 "lvol_store_uuid": "dbda9133-234e-4ba6-944b-f02c495f044f", 00:34:49.557 "base_bdev": "aio_bdev", 00:34:49.557 "thin_provision": false, 00:34:49.557 "num_allocated_clusters": 38, 00:34:49.557 "snapshot": false, 00:34:49.557 "clone": false, 00:34:49.557 "esnap_clone": false 00:34:49.557 } 00:34:49.557 } 00:34:49.557 } 00:34:49.557 ] 00:34:49.557 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:49.557 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:49.557 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:49.818 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:49.818 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:49.818 17:50:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:49.818 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:49.818 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0a55aeff-2dd0-4619-83c0-faf3b9643ca6 00:34:50.078 17:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dbda9133-234e-4ba6-944b-f02c495f044f 00:34:50.339 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:50.339 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:50.339 00:34:50.339 real 0m16.934s 00:34:50.339 user 0m34.897s 00:34:50.339 sys 0m2.915s 00:34:50.339 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:50.339 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:50.339 ************************************ 00:34:50.339 END TEST lvs_grow_dirty 00:34:50.339 ************************************ 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:50.601 nvmf_trace.0 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
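The teardown a few entries above runs strictly top-down so nothing is deleted while another object still references it: the lvol first, then its lvstore, then the aio bdev backing the store, and finally the file on disk. With $rpc as shorthand for the rpc.py path used throughout this log and $testdir for the target test directory (both shorthands, not variables from the script), the order is:

$rpc bdev_lvol_delete 0a55aeff-2dd0-4619-83c0-faf3b9643ca6            # volume first
$rpc bdev_lvol_delete_lvstore -u dbda9133-234e-4ba6-944b-f02c495f044f # then its store
$rpc bdev_aio_delete aio_bdev                                         # then the base bdev
rm -f "$testdir/aio_bdev"                                             # backing file last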
00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:50.601 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:50.602 rmmod nvme_tcp 00:34:50.602 rmmod nvme_fabrics 00:34:50.602 rmmod nvme_keyring 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1752369 ']' 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1752369 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1752369 ']' 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1752369 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1752369 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1752369' 00:34:50.602 killing process with pid 1752369 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1752369 00:34:50.602 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1752369 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:50.863 17:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.409 17:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:53.409 00:34:53.409 real 0m44.074s 00:34:53.409 user 0m53.302s 00:34:53.409 sys 0m10.523s 00:34:53.409 17:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.409 17:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:53.409 ************************************ 00:34:53.409 END TEST nvmf_lvs_grow 00:34:53.409 ************************************ 00:34:53.409 17:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:53.409 17:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:53.409 17:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.409 17:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:53.409 ************************************ 00:34:53.409 START TEST nvmf_bdev_io_wait 00:34:53.409 ************************************ 00:34:53.409 17:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:53.409 * Looking for test storage... 
00:34:53.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.409 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:53.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.410 --rc genhtml_branch_coverage=1 00:34:53.410 --rc genhtml_function_coverage=1 00:34:53.410 --rc genhtml_legend=1 00:34:53.410 --rc geninfo_all_blocks=1 00:34:53.410 --rc geninfo_unexecuted_blocks=1 00:34:53.410 00:34:53.410 ' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:53.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.410 --rc genhtml_branch_coverage=1 00:34:53.410 --rc genhtml_function_coverage=1 00:34:53.410 --rc genhtml_legend=1 00:34:53.410 --rc geninfo_all_blocks=1 00:34:53.410 --rc geninfo_unexecuted_blocks=1 00:34:53.410 00:34:53.410 ' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:53.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.410 --rc genhtml_branch_coverage=1 00:34:53.410 --rc genhtml_function_coverage=1 00:34:53.410 --rc genhtml_legend=1 00:34:53.410 --rc geninfo_all_blocks=1 00:34:53.410 --rc geninfo_unexecuted_blocks=1 00:34:53.410 00:34:53.410 ' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:53.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.410 --rc genhtml_branch_coverage=1 00:34:53.410 --rc genhtml_function_coverage=1 00:34:53.410 --rc genhtml_legend=1 00:34:53.410 --rc geninfo_all_blocks=1 00:34:53.410 --rc 
geninfo_unexecuted_blocks=1 00:34:53.410 00:34:53.410 ' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.410 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.411 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.411 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.411 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.411 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.411 17:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.558 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.558 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:35:01.558 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:01.558 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:01.558 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:01.558 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:01.559 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:01.559 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:01.559 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:01.559 
17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:01.559 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:01.559 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:01.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:35:01.560 00:35:01.560 --- 10.0.0.2 ping statistics --- 00:35:01.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.560 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:35:01.560 00:35:01.560 --- 10.0.0.1 ping statistics --- 00:35:01.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.560 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1754915 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1754915 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1754915 ']' 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
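Stepping back, the nvmf_tcp_init sequence traced above turned the two E810 ports into a self-contained TCP test rig: cvl_0_0 moved into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 left in the root namespace as the initiator side (10.0.0.1), one iptables ACCEPT for port 4420, and a ping in each direction to prove connectivity. Condensed from the commands in the trace:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

The SPDK_NVMF comment tag on the rule is what lets teardown strip exactly these rules later via iptables-save | grep -v SPDK_NVMF | iptables-restore.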
00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:01.560 17:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.560 [2024-12-06 17:50:52.595128] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:01.560 [2024-12-06 17:50:52.596206] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:35:01.560 [2024-12-06 17:50:52.596250] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.560 [2024-12-06 17:50:52.698755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:01.560 [2024-12-06 17:50:52.752761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.560 [2024-12-06 17:50:52.752822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.560 [2024-12-06 17:50:52.752832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.560 [2024-12-06 17:50:52.752840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.560 [2024-12-06 17:50:52.752846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.560 [2024-12-06 17:50:52.755236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.560 [2024-12-06 17:50:52.755401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:01.560 [2024-12-06 17:50:52.755545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.560 [2024-12-06 17:50:52.755546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:01.560 [2024-12-06 17:50:52.755900] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
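nvmfappstart then launches the target inside that namespace; the DPDK EAL and reactor notices above show four reactors (cores 0-3) coming up in interrupt mode, with the app parked on --wait-for-rpc until it is configured. The launch-and-wait shape, using the exact command line from the trace and a deliberately simplified stand-in for waitforlisten (the real helper polls through the RPC client, not merely for the socket file):

    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec cvl_0_0_ns_spdk \
        "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Simplified: wait for the UNIX-domain RPC socket to appear.
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done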
00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.560 [2024-12-06 17:50:53.523465] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:01.560 [2024-12-06 17:50:53.524178] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:01.560 [2024-12-06 17:50:53.524263] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:01.560 [2024-12-06 17:50:53.524442] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
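With the target idling at --wait-for-rpc, bdev_io_wait.sh configures it over /var/tmp/spdk.sock: bdev_set_options -p 5 -c 1 shrinks the bdev_io pool to almost nothing, which is evidently how this test forces I/O submissions onto the bdev_io wait/retry path its name refers to, and framework_start_init then finishes subsystem init (the poll-group intr-mode notices above). The trace that follows creates the transport, a Malloc bdev, and the listening subsystem. The same sequence via scripts/rpc.py, which is what the rpc_cmd wrapper drives:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" bdev_set_options -p 5 -c 1      # tiny bdev_io pool/cache -> IO-wait path
    "$RPC" framework_start_init
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420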
00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.560 [2024-12-06 17:50:53.536105] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.560 Malloc0 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.560 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:01.561 [2024-12-06 17:50:53.608669] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1754955 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1754957 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:01.561 { 00:35:01.561 "params": { 00:35:01.561 "name": "Nvme$subsystem", 00:35:01.561 "trtype": "$TEST_TRANSPORT", 00:35:01.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.561 "adrfam": "ipv4", 00:35:01.561 "trsvcid": "$NVMF_PORT", 00:35:01.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.561 "hdgst": ${hdgst:-false}, 00:35:01.561 "ddgst": ${ddgst:-false} 00:35:01.561 }, 00:35:01.561 "method": "bdev_nvme_attach_controller" 00:35:01.561 } 00:35:01.561 EOF 00:35:01.561 )") 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1754959 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1754962 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:01.561 { 00:35:01.561 "params": { 00:35:01.561 "name": "Nvme$subsystem", 00:35:01.561 "trtype": "$TEST_TRANSPORT", 00:35:01.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.561 "adrfam": "ipv4", 00:35:01.561 "trsvcid": "$NVMF_PORT", 00:35:01.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.561 "hdgst": ${hdgst:-false}, 00:35:01.561 "ddgst": ${ddgst:-false} 00:35:01.561 }, 00:35:01.561 "method": "bdev_nvme_attach_controller" 00:35:01.561 } 00:35:01.561 EOF 00:35:01.561 )") 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@35 -- # sync 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:01.561 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:01.823 { 00:35:01.823 "params": { 00:35:01.823 "name": "Nvme$subsystem", 00:35:01.823 "trtype": "$TEST_TRANSPORT", 00:35:01.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.823 "adrfam": "ipv4", 00:35:01.823 "trsvcid": "$NVMF_PORT", 00:35:01.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.823 "hdgst": ${hdgst:-false}, 00:35:01.823 "ddgst": ${ddgst:-false} 00:35:01.823 }, 00:35:01.823 "method": "bdev_nvme_attach_controller" 00:35:01.823 } 00:35:01.823 EOF 00:35:01.823 )") 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:01.823 { 00:35:01.823 "params": { 00:35:01.823 "name": "Nvme$subsystem", 00:35:01.823 "trtype": "$TEST_TRANSPORT", 00:35:01.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.823 "adrfam": "ipv4", 00:35:01.823 "trsvcid": "$NVMF_PORT", 00:35:01.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.823 "hdgst": ${hdgst:-false}, 00:35:01.823 "ddgst": ${ddgst:-false} 00:35:01.823 }, 00:35:01.823 "method": "bdev_nvme_attach_controller" 00:35:01.823 } 00:35:01.823 EOF 00:35:01.823 )") 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1754955 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:01.823 "params": { 00:35:01.823 "name": "Nvme1", 00:35:01.823 "trtype": "tcp", 00:35:01.823 "traddr": "10.0.0.2", 00:35:01.823 "adrfam": "ipv4", 00:35:01.823 "trsvcid": "4420", 00:35:01.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:01.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:01.823 "hdgst": false, 00:35:01.823 "ddgst": false 00:35:01.823 }, 00:35:01.823 "method": "bdev_nvme_attach_controller" 00:35:01.823 }' 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:01.823 "params": { 00:35:01.823 "name": "Nvme1", 00:35:01.823 "trtype": "tcp", 00:35:01.823 "traddr": "10.0.0.2", 00:35:01.823 "adrfam": "ipv4", 00:35:01.823 "trsvcid": "4420", 00:35:01.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:01.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:01.823 "hdgst": false, 00:35:01.823 "ddgst": false 00:35:01.823 }, 00:35:01.823 "method": "bdev_nvme_attach_controller" 00:35:01.823 }' 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:01.823 "params": { 00:35:01.823 "name": "Nvme1", 00:35:01.823 "trtype": "tcp", 00:35:01.823 "traddr": "10.0.0.2", 00:35:01.823 "adrfam": "ipv4", 00:35:01.823 "trsvcid": "4420", 00:35:01.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:01.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:01.823 "hdgst": false, 00:35:01.823 "ddgst": false 00:35:01.823 }, 00:35:01.823 "method": "bdev_nvme_attach_controller" 00:35:01.823 }' 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:01.823 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:01.823 "params": { 00:35:01.823 "name": "Nvme1", 00:35:01.823 "trtype": "tcp", 00:35:01.823 "traddr": "10.0.0.2", 00:35:01.823 "adrfam": "ipv4", 00:35:01.823 "trsvcid": "4420", 00:35:01.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:01.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:01.823 "hdgst": false, 00:35:01.823 "ddgst": false 00:35:01.823 }, 00:35:01.823 "method": "bdev_nvme_attach_controller" 00:35:01.823 }' 00:35:01.823 [2024-12-06 17:50:53.671003] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:35:01.823 [2024-12-06 17:50:53.671078] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:01.823 [2024-12-06 17:50:53.672510] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:35:01.823 [2024-12-06 17:50:53.672583] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:35:01.823 [2024-12-06 17:50:53.677444] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:35:01.823 [2024-12-06 17:50:53.677507] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:01.824 [2024-12-06 17:50:53.678037] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:35:01.824 [2024-12-06 17:50:53.678098] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:35:02.084 [2024-12-06 17:50:53.896670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.084 [2024-12-06 17:50:53.936731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:02.084 [2024-12-06 17:50:53.986117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.084 [2024-12-06 17:50:54.023563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:02.084 [2024-12-06 17:50:54.050142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.084 [2024-12-06 17:50:54.086312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:02.084 [2024-12-06 17:50:54.114050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.345 [2024-12-06 17:50:54.152256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:02.345 Running I/O for 1 seconds... 00:35:02.345 Running I/O for 1 seconds... 00:35:02.345 Running I/O for 1 seconds... 00:35:02.345 Running I/O for 1 seconds... 
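Four bdevperf instances now run concurrently against the same subsystem, one workload each (write, read, flush, unmap), pinned to distinct cores by core mask and each handed its attach config through process substitution (the --json /dev/fd/63 in the command lines above). The launch/reap pattern, condensed from the trace (gen_nvmf_target_json is the trace's own helper whose heredoc output was printed above):

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    sync
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The per-workload latency tables follow. Note the asymmetry: flush posts ~181k IOPS, likely because a flush against a RAM-backed Malloc bdev is effectively free on the target, while the data-moving workloads sit in the 7-11k range under the deliberately starved bdev_io pool.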
00:35:03.292 7818.00 IOPS, 30.54 MiB/s
00:35:03.292 Latency(us)
00:35:03.292 [2024-12-06T16:50:55.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:03.292 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:35:03.292 Nvme1n1 : 1.02 7774.03 30.37 0.00 0.00 16281.39 4642.13 28617.39
00:35:03.292 [2024-12-06T16:50:55.358Z] ===================================================================================================================
00:35:03.292 [2024-12-06T16:50:55.358Z] Total : 7774.03 30.37 0.00 0.00 16281.39 4642.13 28617.39
00:35:03.292 181432.00 IOPS, 708.72 MiB/s
[2024-12-06T16:50:55.358Z] 7257.00 IOPS, 28.35 MiB/s
00:35:03.292 Latency(us)
00:35:03.292 [2024-12-06T16:50:55.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:03.292 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:35:03.292 Nvme1n1 : 1.00 181068.99 707.30 0.00 0.00 702.71 298.67 1966.08
00:35:03.292 [2024-12-06T16:50:55.358Z] ===================================================================================================================
00:35:03.292 [2024-12-06T16:50:55.358Z] Total : 181068.99 707.30 0.00 0.00 702.71 298.67 1966.08
00:35:03.292
00:35:03.292 Latency(us)
00:35:03.292 [2024-12-06T16:50:55.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:03.292 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:35:03.293 Nvme1n1 : 1.01 7366.83 28.78 0.00 0.00 17317.70 5133.65 27634.35
00:35:03.293 [2024-12-06T16:50:55.359Z] ===================================================================================================================
00:35:03.293 [2024-12-06T16:50:55.359Z] Total : 7366.83 28.78 0.00 0.00 17317.70 5133.65 27634.35
00:35:03.293 11323.00 IOPS, 44.23 MiB/s
00:35:03.293 Latency(us)
00:35:03.293 [2024-12-06T16:50:55.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:03.293 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:35:03.293 Nvme1n1 : 1.01 11375.19 44.43 0.00 0.00 11211.52 4997.12 16820.91
00:35:03.293 [2024-12-06T16:50:55.359Z] ===================================================================================================================
00:35:03.293 [2024-12-06T16:50:55.359Z] Total : 11375.19 44.43 0.00 0.00 11211.52 4997.12 16820.91
00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1754957
17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1754959
17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1754962
17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:35:03.556 17:50:55
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:03.556 rmmod nvme_tcp 00:35:03.556 rmmod nvme_fabrics 00:35:03.556 rmmod nvme_keyring 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1754915 ']' 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1754915 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1754915 ']' 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1754915 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1754915 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1754915' 00:35:03.556 killing process with pid 1754915 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1754915 00:35:03.556 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1754915 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:35:03.818 17:50:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.818 17:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.363 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:06.363 00:35:06.363 real 0m12.890s 00:35:06.363 user 0m15.289s 00:35:06.363 sys 0m7.615s 00:35:06.363 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:06.363 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:06.363 ************************************ 00:35:06.363 END TEST nvmf_bdev_io_wait 00:35:06.363 ************************************ 00:35:06.363 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:06.363 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:06.363 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:06.363 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:06.363 ************************************ 00:35:06.363 START TEST nvmf_queue_depth 00:35:06.363 ************************************ 00:35:06.363 17:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:06.363 * Looking for test storage... 
00:35:06.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:06.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.363 --rc genhtml_branch_coverage=1 00:35:06.363 --rc genhtml_function_coverage=1 00:35:06.363 --rc genhtml_legend=1 00:35:06.363 --rc geninfo_all_blocks=1 00:35:06.363 --rc geninfo_unexecuted_blocks=1 00:35:06.363 00:35:06.363 ' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:06.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.363 --rc genhtml_branch_coverage=1 00:35:06.363 --rc genhtml_function_coverage=1 00:35:06.363 --rc genhtml_legend=1 00:35:06.363 --rc geninfo_all_blocks=1 00:35:06.363 --rc geninfo_unexecuted_blocks=1 00:35:06.363 00:35:06.363 ' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:06.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.363 --rc genhtml_branch_coverage=1 00:35:06.363 --rc genhtml_function_coverage=1 00:35:06.363 --rc genhtml_legend=1 00:35:06.363 --rc geninfo_all_blocks=1 00:35:06.363 --rc geninfo_unexecuted_blocks=1 00:35:06.363 00:35:06.363 ' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:06.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.363 --rc genhtml_branch_coverage=1 00:35:06.363 --rc genhtml_function_coverage=1 00:35:06.363 --rc genhtml_legend=1 00:35:06.363 --rc geninfo_all_blocks=1 00:35:06.363 --rc 
geninfo_unexecuted_blocks=1 00:35:06.363 00:35:06.363 ' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.363 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:35:06.364 17:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:14.504 17:51:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:14.504 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:14.504 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:35:14.504 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:14.504 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:14.504 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:14.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:14.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:35:14.505 00:35:14.505 --- 10.0.0.2 ping statistics --- 00:35:14.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:14.505 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:14.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:14.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:35:14.505 00:35:14.505 --- 10.0.0.1 ping statistics --- 00:35:14.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:14.505 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1757544 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1757544 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1757544 ']' 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
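At this point the target port cvl_0_0 (10.0.0.2) lives inside the cvl_0_0_ns_spdk namespace, the initiator port cvl_0_1 (10.0.0.1) remains in the root namespace, ping succeeds in both directions, and nvmf_tgt has been started inside the namespace on core mask 0x2 in interrupt mode. A condensed replay of the commands the trace just executed (the nvmf_tgt path is shortened; the real iptables rule also carries an SPDK_NVMF comment tag so teardown can filter it back out):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &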
00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.505 17:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:14.505 [2024-12-06 17:51:05.583248] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:14.505 [2024-12-06 17:51:05.584403] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:35:14.505 [2024-12-06 17:51:05.584457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:14.505 [2024-12-06 17:51:05.686593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.505 [2024-12-06 17:51:05.736700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:14.505 [2024-12-06 17:51:05.736751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:14.505 [2024-12-06 17:51:05.736760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:14.505 [2024-12-06 17:51:05.736768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:14.505 [2024-12-06 17:51:05.736774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:14.505 [2024-12-06 17:51:05.737512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.505 [2024-12-06 17:51:05.816141] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:14.505 [2024-12-06 17:51:05.816418] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
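waitforlisten holds the script until pid 1757544 answers on /var/tmp/spdk.sock; the notices above confirm the interesting part of this job, namely that DPDK EAL came up on a single core and both spdk_threads (app_thread and nvmf_tgt_poll_group_000) were placed in interrupt mode. A simplified stand-in for the wait (the real helper also rechecks that the pid is still alive; this sketch only polls for the socket):

  for i in {1..100}; do
      [[ -S /var/tmp/spdk.sock ]] && break    # stop once the RPC socket exists
      sleep 0.1
  done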
00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:14.505 [2024-12-06 17:51:06.446374] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:14.505 Malloc0 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
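rpc_cmd forwards its arguments to the target's RPC socket, so the subsystem setup above is equivalent to issuing the same calls through scripts/rpc.py against the default /var/tmp/spdk.sock (arguments exactly as traced):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

That creates a 64 MiB malloc bdev with 512-byte blocks, exposes it as a namespace of cnode1, and opens the NVMe/TCP listener on the namespaced interface at 10.0.0.2:4420, which the notice below confirms.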
00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:14.505 [2024-12-06 17:51:06.530525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1757574 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1757574 /var/tmp/bdevperf.sock 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1757574 ']' 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:14.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.505 17:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:14.767 [2024-12-06 17:51:06.589420] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:35:14.767 [2024-12-06 17:51:06.589485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1757574 ] 00:35:14.767 [2024-12-06 17:51:06.681512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.767 [2024-12-06 17:51:06.734779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.708 17:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.708 17:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:15.709 17:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:15.709 17:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.709 17:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:15.709 NVMe0n1 00:35:15.709 17:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.709 17:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:15.709 Running I/O for 10 seconds... 00:35:18.038 8637.00 IOPS, 33.74 MiB/s [2024-12-06T16:51:11.047Z] 8714.50 IOPS, 34.04 MiB/s [2024-12-06T16:51:11.988Z] 9898.33 IOPS, 38.67 MiB/s [2024-12-06T16:51:12.939Z] 10752.25 IOPS, 42.00 MiB/s [2024-12-06T16:51:13.883Z] 11274.40 IOPS, 44.04 MiB/s [2024-12-06T16:51:14.826Z] 11654.33 IOPS, 45.52 MiB/s [2024-12-06T16:51:15.768Z] 11977.00 IOPS, 46.79 MiB/s [2024-12-06T16:51:17.150Z] 12154.88 IOPS, 47.48 MiB/s [2024-12-06T16:51:18.088Z] 12297.00 IOPS, 48.04 MiB/s [2024-12-06T16:51:18.088Z] 12436.40 IOPS, 48.58 MiB/s 00:35:26.022 Latency(us) 00:35:26.022 [2024-12-06T16:51:18.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.022 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:26.022 Verification LBA range: start 0x0 length 0x4000 00:35:26.022 NVMe0n1 : 10.05 12469.33 48.71 0.00 0.00 81820.59 11960.32 76021.76 00:35:26.022 [2024-12-06T16:51:18.088Z] =================================================================================================================== 00:35:26.022 [2024-12-06T16:51:18.088Z] Total : 12469.33 48.71 0.00 0.00 81820.59 11960.32 76021.76 00:35:26.022 { 00:35:26.022 "results": [ 00:35:26.022 { 00:35:26.022 "job": "NVMe0n1", 00:35:26.022 "core_mask": "0x1", 00:35:26.022 "workload": "verify", 00:35:26.022 "status": "finished", 00:35:26.022 "verify_range": { 00:35:26.022 "start": 0, 00:35:26.022 "length": 16384 00:35:26.022 }, 00:35:26.022 "queue_depth": 1024, 00:35:26.022 "io_size": 4096, 00:35:26.022 "runtime": 10.045526, 00:35:26.022 "iops": 12469.332118596876, 00:35:26.022 "mibps": 48.70832858826905, 00:35:26.022 "io_failed": 0, 00:35:26.022 "io_timeout": 0, 00:35:26.022 "avg_latency_us": 81820.59480412898, 00:35:26.022 "min_latency_us": 11960.32, 00:35:26.022 "max_latency_us": 76021.76 00:35:26.023 } 00:35:26.023 ], 
00:35:26.023 "core_count": 1 00:35:26.023 } 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1757574 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1757574 ']' 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1757574 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1757574 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1757574' 00:35:26.023 killing process with pid 1757574 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1757574 00:35:26.023 Received shutdown signal, test time was about 10.000000 seconds 00:35:26.023 00:35:26.023 Latency(us) 00:35:26.023 [2024-12-06T16:51:18.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.023 [2024-12-06T16:51:18.089Z] =================================================================================================================== 00:35:26.023 [2024-12-06T16:51:18.089Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1757574 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:26.023 17:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:26.023 rmmod nvme_tcp 00:35:26.023 rmmod nvme_fabrics 00:35:26.023 rmmod nvme_keyring 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:26.023 17:51:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1757544 ']' 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1757544 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1757544 ']' 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1757544 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1757544 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1757544' 00:35:26.023 killing process with pid 1757544 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1757544 00:35:26.023 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1757544 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:26.283 17:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:28.828 00:35:28.828 real 0m22.374s 00:35:28.828 user 0m24.707s 00:35:28.828 sys 0m7.378s 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:28.828 ************************************ 00:35:28.828 END TEST nvmf_queue_depth 00:35:28.828 ************************************ 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:28.828 ************************************ 00:35:28.828 START TEST nvmf_target_multipath 00:35:28.828 ************************************ 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:28.828 * Looking for test storage... 00:35:28.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:28.828 17:51:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:28.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.828 --rc genhtml_branch_coverage=1 00:35:28.828 --rc genhtml_function_coverage=1 00:35:28.828 --rc genhtml_legend=1 00:35:28.828 --rc geninfo_all_blocks=1 00:35:28.828 --rc geninfo_unexecuted_blocks=1 00:35:28.828 00:35:28.828 ' 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:28.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.828 --rc genhtml_branch_coverage=1 00:35:28.828 --rc genhtml_function_coverage=1 00:35:28.828 --rc genhtml_legend=1 00:35:28.828 --rc geninfo_all_blocks=1 00:35:28.828 --rc geninfo_unexecuted_blocks=1 00:35:28.828 00:35:28.828 ' 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:28.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.828 --rc genhtml_branch_coverage=1 00:35:28.828 --rc genhtml_function_coverage=1 00:35:28.828 --rc genhtml_legend=1 00:35:28.828 --rc geninfo_all_blocks=1 00:35:28.828 --rc 
geninfo_unexecuted_blocks=1 00:35:28.828 00:35:28.828 ' 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:28.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.828 --rc genhtml_branch_coverage=1 00:35:28.828 --rc genhtml_function_coverage=1 00:35:28.828 --rc genhtml_legend=1 00:35:28.828 --rc geninfo_all_blocks=1 00:35:28.828 --rc geninfo_unexecuted_blocks=1 00:35:28.828 00:35:28.828 ' 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:35:28.828 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
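The lt 1.15 2 evaluation a few lines up is scripts/common.sh choosing between the old and new lcov flag sets: cmp_versions splits both version strings on '.', '-' and ':', validates each component as a decimal, and lets the first unequal pair decide (1 < 2 here, so the --rc option spelling is exported). A condensed sketch of that comparison:

  ver1=1.15 ver2=2
  IFS='.-:' read -ra v1 <<< "$ver1"
  IFS='.-:' read -ra v2 <<< "$ver2"
  max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for ((i = 0; i < max; i++)); do
      a=${v1[i]:-0} b=${v2[i]:-0}                 # a missing component compares as 0
      (( a > b )) && { echo "$ver1 > $ver2"; break; }
      (( a < b )) && { echo "$ver1 < $ver2"; break; }
  done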
00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:28.829 17:51:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:35:28.829 17:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
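nvmftestinit now repeats the NIC discovery and namespace plumbing for the multipath test. Note that common.sh also generated a host identity earlier (nvme gen-hostnqn) and stashed it in NVME_HOST, since this test exercises the kernel initiator via NVME_CONNECT='nvme connect'. A representative connect call built from those pieces; the traddr and trsvcid shown assume the same listener layout as the queue_depth run, and the test's actual invocations appear further down the log:

  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1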
00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:36.991 17:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:36.991 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:36.992 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:36.992 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:36.992 17:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:36.992 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:36.992 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:36.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
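The discovery pass above matched both E810 ports by PCI vendor:device ID (0x8086:0x159b) and resolved their netdevs through /sys/bus/pci/devices/<bdf>/net; nvmf_tcp_init then builds the harness's loopback topology for physical-NIC TCP runs: the first port (cvl_0_0) is moved into a network namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP on port 4420 before the two pings that follow verify reachability. A minimal standalone sketch of the same setup, using the interface names and addresses from this run (adjust for other hardware):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1     # start from clean interfaces
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                       # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target namespace -> root namespace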
00:35:36.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:35:36.992 00:35:36.992 --- 10.0.0.2 ping statistics --- 00:35:36.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.992 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:36.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:36.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:35:36.992 00:35:36.992 --- 10.0.0.1 ping statistics --- 00:35:36.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.992 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:36.992 only one NIC for nvmf test 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:36.992 17:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:36.992 rmmod nvme_tcp 00:35:36.992 rmmod nvme_fabrics 00:35:36.992 rmmod nvme_keyring 00:35:36.992 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:36.992 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:36.993 17:51:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:36.993 17:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:38.380 17:51:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:38.380 00:35:38.380 real 0m9.778s 00:35:38.380 user 0m2.096s 00:35:38.380 sys 0m5.641s 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:38.380 ************************************ 00:35:38.380 END TEST nvmf_target_multipath 00:35:38.380 ************************************ 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:38.380 ************************************ 00:35:38.380 START TEST nvmf_zcopy 00:35:38.380 ************************************ 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:38.380 * Looking for test storage... 
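The END/START banners and the real/user/sys timing line above come from the harness's run_test helper, which wraps each test script with banners and a time measurement. A rough sketch of that pattern as it appears in this log; the real helper lives in test/common/autotest_common.sh and also manages xtrace state and timing bookkeeping:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # e.g. .../test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }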
00:35:38.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:38.380 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:38.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.381 --rc genhtml_branch_coverage=1 00:35:38.381 --rc genhtml_function_coverage=1 00:35:38.381 --rc genhtml_legend=1 00:35:38.381 --rc geninfo_all_blocks=1 00:35:38.381 --rc geninfo_unexecuted_blocks=1 00:35:38.381 00:35:38.381 ' 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:38.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.381 --rc genhtml_branch_coverage=1 00:35:38.381 --rc genhtml_function_coverage=1 00:35:38.381 --rc genhtml_legend=1 00:35:38.381 --rc geninfo_all_blocks=1 00:35:38.381 --rc geninfo_unexecuted_blocks=1 00:35:38.381 00:35:38.381 ' 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:38.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.381 --rc genhtml_branch_coverage=1 00:35:38.381 --rc genhtml_function_coverage=1 00:35:38.381 --rc genhtml_legend=1 00:35:38.381 --rc geninfo_all_blocks=1 00:35:38.381 --rc geninfo_unexecuted_blocks=1 00:35:38.381 00:35:38.381 ' 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:38.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.381 --rc genhtml_branch_coverage=1 00:35:38.381 --rc genhtml_function_coverage=1 00:35:38.381 --rc genhtml_legend=1 00:35:38.381 --rc geninfo_all_blocks=1 00:35:38.381 --rc geninfo_unexecuted_blocks=1 00:35:38.381 00:35:38.381 ' 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.381 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.643 17:51:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:38.643 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.363 17:51:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.363 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:45.364 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:45.364 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:45.364 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:45.364 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.364 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.625 17:51:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:35:45.625 00:35:45.625 --- 10.0.0.2 ping statistics --- 00:35:45.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.625 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:45.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:35:45.625 00:35:45.625 --- 10.0.0.1 ping statistics --- 00:35:45.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.625 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1763049 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1763049 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1763049 ']' 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.625 17:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:45.885 [2024-12-06 17:51:37.732338] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:45.885 [2024-12-06 17:51:37.733451] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:35:45.885 [2024-12-06 17:51:37.733503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.885 [2024-12-06 17:51:37.830912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.885 [2024-12-06 17:51:37.880453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.885 [2024-12-06 17:51:37.880504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.885 [2024-12-06 17:51:37.880513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.885 [2024-12-06 17:51:37.880521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.885 [2024-12-06 17:51:37.880527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.885 [2024-12-06 17:51:37.881243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.146 [2024-12-06 17:51:37.958721] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:46.146 [2024-12-06 17:51:37.958991] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
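nvmfappstart has just launched the target inside the namespace with a one-core mask (-m 0x2) and --interrupt-mode, and waitforlisten blocks until PID 1763049 answers on /var/tmp/spdk.sock. A sketch of the equivalent manual launch; the polling loop is only a stand-in for the harness's waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # The RPC socket is a unix-domain socket on the filesystem, so rpc.py can
    # reach it from the root namespace; poll until the target answers.
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods > /dev/null 2>&1; do
        sleep 0.1
    done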
00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:46.716 [2024-12-06 17:51:38.578080] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:46.716 [2024-12-06 17:51:38.606260] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:35:46.716 17:51:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:46.716 malloc0 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:46.716 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:46.716 { 00:35:46.716 "params": { 00:35:46.716 "name": "Nvme$subsystem", 00:35:46.716 "trtype": "$TEST_TRANSPORT", 00:35:46.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.716 "adrfam": "ipv4", 00:35:46.716 "trsvcid": "$NVMF_PORT", 00:35:46.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.716 "hdgst": ${hdgst:-false}, 00:35:46.716 "ddgst": ${ddgst:-false} 00:35:46.716 }, 00:35:46.717 "method": "bdev_nvme_attach_controller" 00:35:46.717 } 00:35:46.717 EOF 00:35:46.717 )") 00:35:46.717 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:46.717 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:46.717 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:46.717 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:46.717 "params": { 00:35:46.717 "name": "Nvme1", 00:35:46.717 "trtype": "tcp", 00:35:46.717 "traddr": "10.0.0.2", 00:35:46.717 "adrfam": "ipv4", 00:35:46.717 "trsvcid": "4420", 00:35:46.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:46.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:46.717 "hdgst": false, 00:35:46.717 "ddgst": false 00:35:46.717 }, 00:35:46.717 "method": "bdev_nvme_attach_controller" 00:35:46.717 }' 00:35:46.717 [2024-12-06 17:51:38.704732] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
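At this point the target has been provisioned over RPC (a zero-copy-enabled TCP transport, subsystem cnode1 with serial SPDK00000000000001 and up to 10 namespaces, a listener on 10.0.0.2:4420, and a 32 MB malloc bdev attached as NSID 1), and bdevperf is launched against it; /dev/fd/62 is simply the process-substitution descriptor through which it reads the JSON printed above. A sketch of the same sequence by hand, with paths and arguments taken from this run. One assumption is labeled below: the trace only shows the bdev_nvme_attach_controller fragment, so the enclosing "subsystems"/"bdev" envelope that gen_nvmf_target_json emits around it is reconstructed here.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }
    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy       # --zcopy enables zero-copy
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 4096 -b malloc0              # 32 MB bdev, 4 KiB blocks
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # assumed envelope around the attach fragment shown in the trace above
    "$SPDK/build/examples/bdevperf" --json <(cat <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "params": {
          "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1",
          "hdgst": false, "ddgst": false },
        "method": "bdev_nvme_attach_controller" } ] } ] }
    EOF
    ) -t 10 -q 128 -w verify -o 8192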
00:35:46.717 [2024-12-06 17:51:38.704781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763087 ]
00:35:46.977 [2024-12-06 17:51:38.792760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:46.977 [2024-12-06 17:51:38.829847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:46.977 Running I/O for 10 seconds...
00:35:49.299 6586.00 IOPS, 51.45 MiB/s
[2024-12-06T16:51:42.305Z] 6594.50 IOPS, 51.52 MiB/s
[2024-12-06T16:51:43.248Z] 6644.67 IOPS, 51.91 MiB/s
[2024-12-06T16:51:44.188Z] 6793.25 IOPS, 53.07 MiB/s
[2024-12-06T16:51:45.130Z] 7384.20 IOPS, 57.69 MiB/s
[2024-12-06T16:51:46.068Z] 7771.33 IOPS, 60.71 MiB/s
[2024-12-06T16:51:47.009Z] 8046.86 IOPS, 62.87 MiB/s
[2024-12-06T16:51:48.391Z] 8253.88 IOPS, 64.48 MiB/s
[2024-12-06T16:51:49.334Z] 8415.56 IOPS, 65.75 MiB/s
[2024-12-06T16:51:49.334Z] 8547.00 IOPS, 66.77 MiB/s
00:35:57.268 Latency(us)
00:35:57.268 [2024-12-06T16:51:49.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:57.268 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:35:57.268 Verification LBA range: start 0x0 length 0x1000
00:35:57.268 Nvme1n1 : 10.01 8549.42 66.79 0.00 0.00 14926.15 1549.65 27415.89
00:35:57.268 [2024-12-06T16:51:49.334Z] ===================================================================================================================
00:35:57.268 [2024-12-06T16:51:49.334Z] Total : 8549.42 66.79 0.00 0.00 14926.15 1549.65 27415.89
00:35:57.268 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1763207
00:35:57.268 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:35:57.268 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:57.268 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:35:57.268 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:35:57.268 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:35:57.268 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:35:57.268 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:57.268 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:57.268 {
00:35:57.268 "params": {
00:35:57.268 "name": "Nvme$subsystem",
00:35:57.268 "trtype": "$TEST_TRANSPORT",
00:35:57.268 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:57.268 "adrfam": "ipv4",
00:35:57.268 "trsvcid": "$NVMF_PORT",
00:35:57.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:57.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:57.268 "hdgst": ${hdgst:-false},
00:35:57.268 "ddgst": ${ddgst:-false}
00:35:57.268 },
00:35:57.268 "method": "bdev_nvme_attach_controller"
00:35:57.268 }
00:35:57.268 EOF
00:35:57.268 )")
00:35:57.269 [2024-12-06 17:51:49.113647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:57.269 [2024-12-06 17:51:49.113674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:57.269 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:35:57.269 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:35:57.269 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:35:57.269 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:57.269 "params": {
00:35:57.269 "name": "Nvme1",
00:35:57.269 "trtype": "tcp",
00:35:57.269 "traddr": "10.0.0.2",
00:35:57.269 "adrfam": "ipv4",
00:35:57.269 "trsvcid": "4420",
00:35:57.269 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:57.269 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:57.269 "hdgst": false,
00:35:57.269 "ddgst": false
00:35:57.269 },
00:35:57.269 "method": "bdev_nvme_attach_controller"
00:35:57.269 }'
00:35:57.269 [2024-12-06 17:51:49.125615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:57.269 [2024-12-06 17:51:49.125625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:57.269 [2024-12-06 17:51:49.137612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:57.269 [2024-12-06 17:51:49.137619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:57.269 [2024-12-06 17:51:49.149611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:57.269 [2024-12-06 17:51:49.149619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:57.269 [2024-12-06 17:51:49.156677] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
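Both bdevperf runs receive their initiator configuration on an inherited file descriptor (--json /dev/fd/62 and /dev/fd/63): gen_nvmf_target_json expands the bdev_nvme_attach_controller fragment printed above and hands it to bdevperf through process substitution. A standalone equivalent might write the config to a file instead; a sketch assuming SPDK's "subsystems"/"config" JSON layout for the wrapper (the /tmp path is arbitrary, the parameter values are copied from the log, and the bdevperf path is relative to an SPDK build tree):

    # Hypothetical standalone config file for the same attach-controller call.
    cat > /tmp/zcopy.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # First run in the log: 10 s verify workload, queue depth 128, 8 KiB I/O.
    ./build/examples/bdevperf --json /tmp/zcopy.json -t 10 -q 128 -w verify -o 8192
    # Second run: 5 s mixed random workload with a 50% read mix (-M 50).
    ./build/examples/bdevperf --json /tmp/zcopy.json -t 5 -q 128 -w randrw -M 50 -o 8192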
00:35:57.269 [2024-12-06 17:51:49.156725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763207 ] 00:35:57.269 [2024-12-06 17:51:49.161612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.161619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.173611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.173620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.185612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.185620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.197612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.197620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.209611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.209619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.221612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.221619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.233611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.233618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.237798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.269 [2024-12-06 17:51:49.245612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.245621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.257612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.257620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.266743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.269 [2024-12-06 17:51:49.269611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.269620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.281619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.281630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.293618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.293630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.305614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:35:57.269 [2024-12-06 17:51:49.305625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.317613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.317622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.269 [2024-12-06 17:51:49.329611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.269 [2024-12-06 17:51:49.329619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.341618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.341634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.353613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.353622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.365613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.365623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.377612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.377621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.389612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.389619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.401611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.401618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.413613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.413624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.425612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.425621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.437611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.437619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.449611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.449619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.461612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.461621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.473611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.473618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 
17:51:49.485615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.485627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.497611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.497618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.510955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.510968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.521614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.521624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 Running I/O for 5 seconds... 00:35:57.531 [2024-12-06 17:51:49.536294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.536312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.549523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.549540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.562266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.562283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.577055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.577071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.531 [2024-12-06 17:51:49.590201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.531 [2024-12-06 17:51:49.590215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.604307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.604322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.617335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.617351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.630170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.630185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.644933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.644949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.657607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.657623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.670495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
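From here the log interleaves the 5-second randrw run with a steady stream of paired errors: each nvmf_subsystem_add_ns attempt is rejected in spdk_nvmf_subsystem_add_ns_ext because NSID 1 is already held by malloc0, and nvmf_rpc_ns_paused then reports the failed RPC. The pairs recur roughly every 12 ms for the whole run, which is consistent with the test hammering the RPC in a loop while I/O is in flight so that the subsystem pause/resume path is exercised concurrently with zcopy traffic. A loop of that shape might look like the following sketch ($perfpid comes from the trace above; the loop body is an assumption, not the script's literal code):

    # Hypothetical add_ns hammer while bdevperf (pid $perfpid) is running.
    while kill -0 "$perfpid" 2>/dev/null; do
        # Expected to fail while NSID 1 is occupied by malloc0; the point is
        # to drive subsystem pause/resume concurrently with zcopy I/O.
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done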
00:35:57.793 [2024-12-06 17:51:49.670510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.684480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.684496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.697673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.697688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.710678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.710694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.724710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.724726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.738047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.738062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.753513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.753529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.766375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.766390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.781071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.781086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.794271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.794287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.808345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.808362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.821262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.821277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.834972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.834988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:57.793 [2024-12-06 17:51:49.848769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:57.793 [2024-12-06 17:51:49.848784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.861656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.861672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.874335] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.874350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.889004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.889020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.901915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.901929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.916880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.916895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.930200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.930215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.944204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.944219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.957427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.957442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.970566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.970581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.984470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.984486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:49.997302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:49.997317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:50.009941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:50.009957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:50.024705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:50.024721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:50.037848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:50.037864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:50.050776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:50.050792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:50.064736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:50.064751] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:50.078282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:50.078297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:50.092746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:50.092761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.054 [2024-12-06 17:51:50.105851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.054 [2024-12-06 17:51:50.105866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.118943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.118959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.132811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.132827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.145917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.145932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.160542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.160558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.173835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.173850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.186654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.186668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.200871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.200886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.213943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.213958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.228682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.228698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.241814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.241830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.254263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.254277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.268977] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.268992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.282228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.282243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.296574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.296589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.309856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.309871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.322627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.322647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.337134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.314 [2024-12-06 17:51:50.337148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.314 [2024-12-06 17:51:50.350308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.315 [2024-12-06 17:51:50.350322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.315 [2024-12-06 17:51:50.365164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.315 [2024-12-06 17:51:50.365179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.315 [2024-12-06 17:51:50.378393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.315 [2024-12-06 17:51:50.378409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.392741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.392757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.405851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.405866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.419237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.419252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.433144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.433160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.446558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.446575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.460844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.460860] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.474114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.474129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.488940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.488955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.502404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.502423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.517430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.517445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 18843.00 IOPS, 147.21 MiB/s [2024-12-06T16:51:50.641Z] [2024-12-06 17:51:50.530669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.530685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.575 [2024-12-06 17:51:50.544490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.575 [2024-12-06 17:51:50.544506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.576 [2024-12-06 17:51:50.557803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.576 [2024-12-06 17:51:50.557818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.576 [2024-12-06 17:51:50.570701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.576 [2024-12-06 17:51:50.570715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.576 [2024-12-06 17:51:50.584495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.576 [2024-12-06 17:51:50.584510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.576 [2024-12-06 17:51:50.597903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.576 [2024-12-06 17:51:50.597917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.576 [2024-12-06 17:51:50.612672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.576 [2024-12-06 17:51:50.612688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.576 [2024-12-06 17:51:50.625726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.576 [2024-12-06 17:51:50.625741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.576 [2024-12-06 17:51:50.637845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.576 [2024-12-06 17:51:50.637860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.650574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.650591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 
17:51:50.665544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.665560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.678510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.678525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.692950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.692965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.706170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.706185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.720976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.720991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.734400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.734415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.749255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.749270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.762454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.762472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.776506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.776521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.789926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.789941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.805233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.805248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.818592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.818607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.833062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.833078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.846475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.846490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.861138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.861153] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.874296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.874310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.888716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.888731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:58.838 [2024-12-06 17:51:50.901716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:58.838 [2024-12-06 17:51:50.901731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:50.914649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:50.914665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:50.928621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:50.928641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:50.941775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:50.941790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:50.954471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:50.954485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:50.969241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:50.969256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:50.982450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:50.982466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:50.997239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:50.997254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.010243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.010257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.024884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.024905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.038165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.038180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.052141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.052156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.065196] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.065211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.078023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.078037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.092713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.092731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.105987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.106001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.120696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.120711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.133854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.133869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.100 [2024-12-06 17:51:51.146218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.100 [2024-12-06 17:51:51.146232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.101 [2024-12-06 17:51:51.160944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.101 [2024-12-06 17:51:51.160959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.174064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.174078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.189154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.189169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.202468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.202482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.217309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.217324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.230697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.230712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.245461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.245476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.258659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.258674] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.273713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.273728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.286900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.286915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.301484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.301499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.314745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.314760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.329051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.329067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.342392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.342406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.356840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.356855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.369961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.369976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.384783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.384798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.397690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.397705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.410350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.410365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.363 [2024-12-06 17:51:51.424746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.363 [2024-12-06 17:51:51.424761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.437800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.437816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.450704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.450719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.465121] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.465136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.478273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.478290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.492565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.492582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.505747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.505763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.518343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.518358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 18852.50 IOPS, 147.29 MiB/s [2024-12-06T16:51:51.690Z] [2024-12-06 17:51:51.533046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.533062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.546115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.546130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.560512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.560527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.573449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.573465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.586468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.586483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.601077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.601092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.614329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.614344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.629017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.629032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.642050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:59.624 [2024-12-06 17:51:51.642064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:59.624 [2024-12-06 17:51:51.656736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
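The per-second ticks report both IOPS and MiB/s, and the two columns are consistent with the 8 KiB I/O size (-o 8192) used by both runs: MiB/s = IOPS x 8192 / 2^20. A quick check against the figures in this log, using awk only for the arithmetic:

    # 18852.50 IOPS at 8 KiB per I/O -> 147.29 MiB/s, matching the tick above.
    awk 'BEGIN { printf "%.2f\n", 18852.50 * 8192 / 1048576 }'
    # The 10 s verify run's Total row: 8549.42 IOPS -> 66.79 MiB/s.
    awk 'BEGIN { printf "%.2f\n", 8549.42 * 8192 / 1048576 }'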
00:35:59.624 [2024-12-06 17:51:51.656752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:59.624 [2024-12-06 17:51:51.669825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:59.624 [2024-12-06 17:51:51.669839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:59.885 [... the same two *ERROR* lines repeat roughly every 13 ms, 17:51:51.682438 through 17:51:54.424421 (about 200 further duplicate pairs elided); the bdevperf progress updates interleaved in that stretch are kept below ...]
00:36:00.670 18857.33 IOPS, 147.32 MiB/s [2024-12-06T16:51:52.736Z]
00:36:01.715 18871.25 IOPS, 147.43 MiB/s [2024-12-06T16:51:53.781Z]
00:36:02.498 [2024-12-06 17:51:54.437687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.498 [2024-12-06 17:51:54.437703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
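The paired *ERROR* lines are one failed RPC each time: spdk_nvmf_subsystem_add_ns_ext() rejects an NSID that is already allocated, and the RPC layer then logs "Unable to add namespace". A minimal sketch of provoking the same pair by hand with SPDK's rpc.py — the transport, subsystem, and malloc bdev setup here are illustrative, not taken from this run:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 512                           # 64 MiB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # first add: succeeds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # second add: "Requested NSID 1 already in use"

The failure is clean and the target keeps serving I/O, which appears to be what this stretch of the test checks while bdevperf runs in the background.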
00:36:02.498 [2024-12-06 17:51:54.450680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.498 [2024-12-06 17:51:54.450696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.498 [2024-12-06 17:51:54.464797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.498 [2024-12-06 17:51:54.464812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.498 [2024-12-06 17:51:54.477744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.498 [2024-12-06 17:51:54.477759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.498 [2024-12-06 17:51:54.490833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.498 [2024-12-06 17:51:54.490848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.498 [2024-12-06 17:51:54.505077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.498 [2024-12-06 17:51:54.505092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.498 [2024-12-06 17:51:54.518528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.498 [2024-12-06 17:51:54.518543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.498 [2024-12-06 17:51:54.533564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.498 [2024-12-06 17:51:54.533579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.498 18890.80 IOPS, 147.58 MiB/s [2024-12-06T16:51:54.564Z]
00:36:02.498 [2024-12-06 17:51:54.542386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.498 [2024-12-06 17:51:54.542400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.498
00:36:02.498                                                                    Latency(us)
00:36:02.498 [2024-12-06T16:51:54.565Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:02.499 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:36:02.499 Nvme1n1                     :       5.01   18898.25     147.64       0.00     0.00    6767.18    2116.27   11741.87
00:36:02.499 [2024-12-06T16:51:54.565Z] ===================================================================================================================
00:36:02.499 [2024-12-06T16:51:54.565Z] Total                       :              18898.25     147.64       0.00     0.00    6767.18    2116.27   11741.87
00:36:02.499 [2024-12-06 17:51:54.553615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.499 [2024-12-06 17:51:54.553628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.759 [2024-12-06 17:51:54.565621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.759 [2024-12-06 17:51:54.565634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.759 [2024-12-06 17:51:54.577616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.759 [2024-12-06 17:51:54.577629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.759 [2024-12-06 17:51:54.589618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.759 [2024-12-06 17:51:54.589629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
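Consistency check on the summary table above: at the job's 8192-byte IO size, 18898.25 IOPS works out to 18898.25 * 8192 / 2^20 ≈ 147.64 MiB/s, matching the MiB/s column. For example:

    awk 'BEGIN { printf "%.2f MiB/s\n", 18898.25 * 8192 / (1024 * 1024) }'    # prints 147.64 MiB/s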
00:36:02.759 [2024-12-06 17:51:54.601613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.759 [2024-12-06 17:51:54.601623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.759 [2024-12-06 17:51:54.613613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.759 [2024-12-06 17:51:54.613623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.759 [2024-12-06 17:51:54.625614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.759 [2024-12-06 17:51:54.625623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.759 [2024-12-06 17:51:54.637613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.759 [2024-12-06 17:51:54.637622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1763207) - No such process
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1763207
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:02.759 delay0
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:02.759 17:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:36:03.020 [2024-12-06 17:51:54.847681] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:36:09.596 Initializing NVMe Controllers
00:36:09.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420:
nqn.2016-06.io.spdk:cnode1 00:36:09.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:09.596 Initialization complete. Launching workers. 00:36:09.596 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1211 00:36:09.596 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1479, failed to submit 52 00:36:09.596 success 1305, unsuccessful 174, failed 0 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:09.596 rmmod nvme_tcp 00:36:09.596 rmmod nvme_fabrics 00:36:09.596 rmmod nvme_keyring 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1763049 ']' 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1763049 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1763049 ']' 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1763049 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1763049 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1763049' 00:36:09.596 killing process with pid 1763049 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1763049 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1763049 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # 
'[' '' == iso ']' 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:09.596 17:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:12.139 00:36:12.139 real 0m33.371s 00:36:12.139 user 0m42.446s 00:36:12.139 sys 0m12.166s 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:12.139 ************************************ 00:36:12.139 END TEST nvmf_zcopy 00:36:12.139 ************************************ 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:12.139 ************************************ 00:36:12.139 START TEST nvmf_nmic 00:36:12.139 ************************************ 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:12.139 * Looking for test storage... 
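The nvmf_tcp_fini trace in the teardown above boils down to a few shell commands. A condensed sketch of what the traced iptr and remove_spdk_ns helpers do — the netns name follows the [[ ... ]] guard in the trace, and the ip netns call is an assumption about _remove_spdk_ns, not copied from it:

    # drop only the SPDK_NVMF-tagged firewall rules, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # tear down the test network namespace and flush leftover addresses
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true
    ip -4 addr flush cvl_0_1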
00:36:12.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:12.139 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:12.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.140 --rc genhtml_branch_coverage=1 00:36:12.140 --rc genhtml_function_coverage=1 00:36:12.140 --rc genhtml_legend=1 00:36:12.140 --rc geninfo_all_blocks=1 00:36:12.140 --rc geninfo_unexecuted_blocks=1 00:36:12.140 00:36:12.140 ' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:12.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.140 --rc genhtml_branch_coverage=1 00:36:12.140 --rc genhtml_function_coverage=1 00:36:12.140 --rc genhtml_legend=1 00:36:12.140 --rc geninfo_all_blocks=1 00:36:12.140 --rc geninfo_unexecuted_blocks=1 00:36:12.140 00:36:12.140 ' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:12.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.140 --rc genhtml_branch_coverage=1 00:36:12.140 --rc genhtml_function_coverage=1 00:36:12.140 --rc genhtml_legend=1 00:36:12.140 --rc geninfo_all_blocks=1 00:36:12.140 --rc geninfo_unexecuted_blocks=1 00:36:12.140 00:36:12.140 ' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:12.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.140 --rc genhtml_branch_coverage=1 00:36:12.140 --rc genhtml_function_coverage=1 00:36:12.140 --rc genhtml_legend=1 00:36:12.140 --rc geninfo_all_blocks=1 00:36:12.140 --rc geninfo_unexecuted_blocks=1 00:36:12.140 00:36:12.140 ' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:12.140 17:52:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:12.140 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:12.141 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:12.141 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:12.141 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:12.141 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.141 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.141 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.141 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:12.141 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:12.141 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:36:12.141 17:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:18.729 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:18.729 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:36:18.729 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:18.729 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:18.729 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:18.730 17:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:18.730 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:18.730 17:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:18.730 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:18.730 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.730 
17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:18.730 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:18.730 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:18.992 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:18.992 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
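The nvmf_tcp_init sequence traced above builds a point-to-point TCP topology by moving one port of the two-port E810 NIC into a private namespace; as a minimal sketch of the same commands, in order:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables ACCEPT rule and the two pings that follow in the trace then verify that port 4420 is open and that each side can reach the other.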
00:36:18.992 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:18.992 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:18.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:18.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:36:18.992 00:36:18.992 --- 10.0.0.2 ping statistics --- 00:36:18.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.992 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:18.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:18.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:36:18.992 00:36:18.992 --- 10.0.0.1 ping statistics --- 00:36:18.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.992 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:18.992 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1765790 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1765790 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1765790 ']' 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:19.253 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:19.253 [2024-12-06 17:52:11.161417] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:19.253 [2024-12-06 17:52:11.162573] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:36:19.253 [2024-12-06 17:52:11.162628] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.253 [2024-12-06 17:52:11.263883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:19.253 [2024-12-06 17:52:11.316961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.253 [2024-12-06 17:52:11.317014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:19.253 [2024-12-06 17:52:11.317023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:19.253 [2024-12-06 17:52:11.317030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:19.253 [2024-12-06 17:52:11.317036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:19.514 [2024-12-06 17:52:11.319033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.514 [2024-12-06 17:52:11.319193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:19.514 [2024-12-06 17:52:11.319356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.514 [2024-12-06 17:52:11.319356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:19.514 [2024-12-06 17:52:11.397469] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:19.514 [2024-12-06 17:52:11.398517] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:19.514 [2024-12-06 17:52:11.398783] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
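The target application is started inside the namespace, as the nvmfappstart trace above shows; a sketch of the launch and of what waitforlisten does (the polling loop is my approximation of the helper, not its literal body):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!   # 1765790 in this run
  # waitforlisten blocks until the app answers on its UNIX-domain RPC socket
  while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.1
  done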
00:36:19.514 [2024-12-06 17:52:11.399242] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:19.514 [2024-12-06 17:52:11.399304] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:20.087 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:20.087 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:36:20.087 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:20.087 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:20.087 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:20.087 [2024-12-06 17:52:12.016212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:20.087 Malloc0 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:20.087 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
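rpc_cmd in the trace is the test harness wrapper around scripts/rpc.py; the provisioning it performs here is effectively:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The second nvmf_subsystem_add_ns against cnode2, traced just below, is expected to fail: Malloc0 is already claimed exclusive_write by cnode1, which is exactly what test case1 asserts.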
00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:20.088 [2024-12-06 17:52:12.108546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:20.088 test case1: single bdev can't be used in multiple subsystems 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:20.088 [2024-12-06 17:52:12.143845] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:20.088 [2024-12-06 17:52:12.143882] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:20.088 [2024-12-06 17:52:12.143891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:20.088 request: 00:36:20.088 { 00:36:20.088 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:20.088 "namespace": { 00:36:20.088 "bdev_name": "Malloc0", 00:36:20.088 "no_auto_visible": false, 00:36:20.088 "hide_metadata": false 00:36:20.088 }, 00:36:20.088 "method": "nvmf_subsystem_add_ns", 00:36:20.088 "req_id": 1 00:36:20.088 } 00:36:20.088 Got JSON-RPC error response 00:36:20.088 response: 00:36:20.088 { 00:36:20.088 "code": -32602, 00:36:20.088 "message": "Invalid parameters" 00:36:20.088 } 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:20.088 17:52:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:20.088 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:20.088 Adding namespace failed - expected result. 00:36:20.350 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:20.350 test case2: host connect to nvmf target in multiple paths 00:36:20.350 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:20.350 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.350 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:20.350 [2024-12-06 17:52:12.156002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:20.350 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.350 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:20.610 17:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:21.182 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:21.182 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:36:21.182 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:21.182 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:21.182 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:36:23.096 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:23.096 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:23.096 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:23.096 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:23.096 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:23.096 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:36:23.096 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:23.096 [global] 00:36:23.096 thread=1 00:36:23.096 invalidate=1 
00:36:23.096 rw=write 00:36:23.096 time_based=1 00:36:23.096 runtime=1 00:36:23.096 ioengine=libaio 00:36:23.096 direct=1 00:36:23.096 bs=4096 00:36:23.096 iodepth=1 00:36:23.096 norandommap=0 00:36:23.096 numjobs=1 00:36:23.096 00:36:23.096 verify_dump=1 00:36:23.096 verify_backlog=512 00:36:23.096 verify_state_save=0 00:36:23.096 do_verify=1 00:36:23.096 verify=crc32c-intel 00:36:23.096 [job0] 00:36:23.096 filename=/dev/nvme0n1 00:36:23.096 Could not set queue depth (nvme0n1) 00:36:23.663 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:23.663 fio-3.35 00:36:23.663 Starting 1 thread 00:36:24.604 00:36:24.604 job0: (groupid=0, jobs=1): err= 0: pid=1766036: Fri Dec 6 17:52:16 2024 00:36:24.604 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:24.604 slat (nsec): min=6476, max=40245, avg=26119.44, stdev=1781.20 00:36:24.604 clat (usec): min=679, max=1206, avg=961.11, stdev=41.92 00:36:24.604 lat (usec): min=686, max=1232, avg=987.23, stdev=42.35 00:36:24.604 clat percentiles (usec): 00:36:24.604 | 1.00th=[ 857], 5.00th=[ 889], 10.00th=[ 922], 20.00th=[ 938], 00:36:24.604 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 963], 00:36:24.604 | 70.00th=[ 971], 80.00th=[ 988], 90.00th=[ 1004], 95.00th=[ 1029], 00:36:24.604 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1205], 99.95th=[ 1205], 00:36:24.604 | 99.99th=[ 1205] 00:36:24.604 write: IOPS=823, BW=3293KiB/s (3372kB/s)(3296KiB/1001msec); 0 zone resets 00:36:24.604 slat (nsec): min=9145, max=65807, avg=28804.67, stdev=10357.15 00:36:24.604 clat (usec): min=262, max=784, avg=560.22, stdev=88.33 00:36:24.604 lat (usec): min=272, max=816, avg=589.03, stdev=94.05 00:36:24.604 clat percentiles (usec): 00:36:24.604 | 1.00th=[ 330], 5.00th=[ 392], 10.00th=[ 437], 20.00th=[ 490], 00:36:24.604 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 586], 00:36:24.604 | 70.00th=[ 611], 80.00th=[ 635], 90.00th=[ 668], 95.00th=[ 685], 00:36:24.604 | 99.00th=[ 725], 99.50th=[ 725], 99.90th=[ 783], 99.95th=[ 783], 00:36:24.604 | 99.99th=[ 783] 00:36:24.604 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:24.604 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:24.604 lat (usec) : 500=14.37%, 750=47.23%, 1000=33.76% 00:36:24.604 lat (msec) : 2=4.64% 00:36:24.604 cpu : usr=3.30%, sys=4.30%, ctx=1336, majf=0, minf=1 00:36:24.604 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:24.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.604 issued rwts: total=512,824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:24.604 00:36:24.604 Run status group 0 (all jobs): 00:36:24.604 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:36:24.604 WRITE: bw=3293KiB/s (3372kB/s), 3293KiB/s-3293KiB/s (3372kB/s-3372kB/s), io=3296KiB (3375kB), run=1001-1001msec 00:36:24.604 00:36:24.604 Disk stats (read/write): 00:36:24.604 nvme0n1: ios=562/649, merge=0/0, ticks=524/295, in_queue=819, util=93.49% 00:36:24.604 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:24.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:24.865 17:52:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:24.865 rmmod nvme_tcp 00:36:24.865 rmmod nvme_fabrics 00:36:24.865 rmmod nvme_keyring 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1765790 ']' 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1765790 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1765790 ']' 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1765790 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:24.865 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1765790 00:36:25.126 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:25.126 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:25.126 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1765790' 00:36:25.126 killing process with pid 1765790 00:36:25.126 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1765790 00:36:25.126 17:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1765790 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:25.126 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:27.674 00:36:27.674 real 0m15.465s 00:36:27.674 user 0m37.432s 00:36:27.674 sys 0m7.174s 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:27.674 ************************************ 00:36:27.674 END TEST nvmf_nmic 00:36:27.674 ************************************ 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:27.674 ************************************ 00:36:27.674 START TEST nvmf_fio_target 00:36:27.674 ************************************ 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:27.674 * Looking for test storage... 
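Recapping the host side of the nmic test that just ended (test case2), reconstructed from the trace above: two connects to the same subsystem on different ports give two paths to one namespace, so a single disconnect by NQN tears down both controllers, matching the "disconnected 2 controller(s)" line.

  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v   # 4 KiB writes, iodepth 1, 1 s run, crc32c verify
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1               # drops both paths at once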
00:36:27.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:27.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.674 --rc genhtml_branch_coverage=1 00:36:27.674 --rc genhtml_function_coverage=1 00:36:27.674 --rc genhtml_legend=1 00:36:27.674 --rc geninfo_all_blocks=1 00:36:27.674 --rc geninfo_unexecuted_blocks=1 00:36:27.674 00:36:27.674 ' 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:27.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.674 --rc genhtml_branch_coverage=1 00:36:27.674 --rc genhtml_function_coverage=1 00:36:27.674 --rc genhtml_legend=1 00:36:27.674 --rc geninfo_all_blocks=1 00:36:27.674 --rc geninfo_unexecuted_blocks=1 00:36:27.674 00:36:27.674 ' 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:27.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.674 --rc genhtml_branch_coverage=1 00:36:27.674 --rc genhtml_function_coverage=1 00:36:27.674 --rc genhtml_legend=1 00:36:27.674 --rc geninfo_all_blocks=1 00:36:27.674 --rc geninfo_unexecuted_blocks=1 00:36:27.674 00:36:27.674 ' 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:27.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.674 --rc genhtml_branch_coverage=1 00:36:27.674 --rc genhtml_function_coverage=1 00:36:27.674 --rc genhtml_legend=1 00:36:27.674 --rc geninfo_all_blocks=1 00:36:27.674 --rc geninfo_unexecuted_blocks=1 00:36:27.674 
00:36:27.674 ' 00:36:27.674 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:27.675 17:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:35.820 17:52:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:35.820 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:35.821 17:52:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:35.821 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:35.821 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:35.821 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:35.821 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:35.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:35.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:36:35.821 00:36:35.821 --- 10.0.0.2 ping statistics --- 00:36:35.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:35.821 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:35.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:35.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:36:35.821 00:36:35.821 --- 10.0.0.1 ping statistics --- 00:36:35.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:35.821 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1768496 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1768496 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1768496 ']' 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:35.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
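The trace above moves one of the two e810 ports (cvl_0_0, the target side) into a private network namespace and leaves its peer (cvl_0_1, the initiator side) in the root namespace, so NVMe/TCP traffic crosses a real cable between 10.0.0.2 and 10.0.0.1 before both directions are verified with ping. A minimal sketch of that topology in plain shell, assuming the same port names and addresses as this log (adjust for other rigs):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target port goes into the namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the host firewall, as in the log
ping -c 1 10.0.0.2                                   # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> root namespace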
00:36:35.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:35.822 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:35.822 [2024-12-06 17:52:26.764934] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:35.822 [2024-12-06 17:52:26.766060] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:36:35.822 [2024-12-06 17:52:26.766116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:35.822 [2024-12-06 17:52:26.865585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:35.822 [2024-12-06 17:52:26.918608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:35.822 [2024-12-06 17:52:26.918673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:35.822 [2024-12-06 17:52:26.918689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:35.822 [2024-12-06 17:52:26.918696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:35.822 [2024-12-06 17:52:26.918703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:35.822 [2024-12-06 17:52:26.920708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.822 [2024-12-06 17:52:26.920798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:35.822 [2024-12-06 17:52:26.920959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:35.822 [2024-12-06 17:52:26.920960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.822 [2024-12-06 17:52:26.999912] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:35.822 [2024-12-06 17:52:27.000897] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:35.822 [2024-12-06 17:52:27.001225] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:35.822 [2024-12-06 17:52:27.001797] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:35.822 [2024-12-06 17:52:27.001802] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
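Once the app is listening on /var/tmp/spdk.sock in interrupt mode, the target for this run is assembled entirely over JSON-RPC; the trace below walks through it call by call. Condensed into plain shell, roughly (RPC names, sizes, NQN, address and port copied from this log; $SPDK_DIR and the rpc_get_methods readiness poll are assumptions, not the harness's exact waitforlisten logic):

rpc="$SPDK_DIR/scripts/rpc.py"
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done   # wait for the RPC socket
"$rpc" nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do "$rpc" bdev_malloc_create 64 512; done  # Malloc0..Malloc6: 64 MB each, 512 B blocks
"$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'           # striped raid0 over two malloc bdevs
"$rpc" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'  # concat over three more
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"  # one namespace per bdev -> nvme0n1..n4
done
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420      # initiator side; hostnqn/hostid flags as in the log

The four namespaces are what the fio-wrapper runs below exercise as /dev/nvme0n1 through /dev/nvme0n4.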
00:36:35.822 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:35.822 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:36:35.822 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:35.822 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:35.822 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:35.822 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:35.822 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:35.822 [2024-12-06 17:52:27.778066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:35.822 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:36.083 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:36.083 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:36.344 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:36.344 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:36.606 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:36.606 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:36.868 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:36.868 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:36.868 17:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:37.129 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:37.129 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:37.391 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:37.391 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:37.653 17:52:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:36:37.653 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:37.653 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:37.914 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:37.914 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:37.914 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:37.914 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:38.183 17:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:38.480 [2024-12-06 17:52:30.329997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.480 17:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:38.842 17:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:38.842 17:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:39.416 17:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:39.416 17:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:36:39.416 17:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:39.416 17:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:36:39.417 17:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:36:39.417 17:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:36:41.330 17:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:41.330 17:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:36:41.330 17:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:41.330 17:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:36:41.330 17:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:41.330 17:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:36:41.330 17:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:41.330 [global] 00:36:41.330 thread=1 00:36:41.330 invalidate=1 00:36:41.330 rw=write 00:36:41.330 time_based=1 00:36:41.330 runtime=1 00:36:41.330 ioengine=libaio 00:36:41.330 direct=1 00:36:41.330 bs=4096 00:36:41.330 iodepth=1 00:36:41.330 norandommap=0 00:36:41.330 numjobs=1 00:36:41.330 00:36:41.330 verify_dump=1 00:36:41.330 verify_backlog=512 00:36:41.330 verify_state_save=0 00:36:41.330 do_verify=1 00:36:41.330 verify=crc32c-intel 00:36:41.330 [job0] 00:36:41.330 filename=/dev/nvme0n1 00:36:41.330 [job1] 00:36:41.330 filename=/dev/nvme0n2 00:36:41.330 [job2] 00:36:41.330 filename=/dev/nvme0n3 00:36:41.330 [job3] 00:36:41.330 filename=/dev/nvme0n4 00:36:41.330 Could not set queue depth (nvme0n1) 00:36:41.330 Could not set queue depth (nvme0n2) 00:36:41.330 Could not set queue depth (nvme0n3) 00:36:41.330 Could not set queue depth (nvme0n4) 00:36:41.899 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:41.899 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:41.899 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:41.899 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:41.899 fio-3.35 00:36:41.899 Starting 4 threads 00:36:43.280 00:36:43.280 job0: (groupid=0, jobs=1): err= 0: pid=1768822: Fri Dec 6 17:52:34 2024 00:36:43.280 read: IOPS=562, BW=2250KiB/s (2304kB/s)(2252KiB/1001msec) 00:36:43.280 slat (nsec): min=6791, max=54276, avg=25370.88, stdev=3936.64 00:36:43.280 clat (usec): min=418, max=1185, avg=884.34, stdev=123.68 00:36:43.280 lat (usec): min=446, max=1210, avg=909.71, stdev=123.76 00:36:43.280 clat percentiles (usec): 00:36:43.280 | 1.00th=[ 562], 5.00th=[ 660], 10.00th=[ 709], 20.00th=[ 775], 00:36:43.280 | 30.00th=[ 824], 40.00th=[ 873], 50.00th=[ 914], 60.00th=[ 938], 00:36:43.280 | 70.00th=[ 963], 80.00th=[ 988], 90.00th=[ 1020], 95.00th=[ 1057], 00:36:43.280 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1188], 00:36:43.280 | 99.99th=[ 1188] 00:36:43.280 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:36:43.280 slat (nsec): min=9216, max=70691, avg=29323.04, stdev=9758.44 00:36:43.280 clat (usec): min=203, max=736, avg=435.75, stdev=110.87 00:36:43.280 lat (usec): min=224, max=789, avg=465.08, stdev=113.99 00:36:43.280 clat percentiles (usec): 00:36:43.280 | 1.00th=[ 223], 5.00th=[ 262], 10.00th=[ 293], 20.00th=[ 338], 00:36:43.280 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[ 441], 60.00th=[ 469], 00:36:43.280 | 70.00th=[ 494], 80.00th=[ 529], 90.00th=[ 594], 95.00th=[ 627], 00:36:43.280 | 99.00th=[ 685], 
99.50th=[ 693], 99.90th=[ 725], 99.95th=[ 734], 00:36:43.280 | 99.99th=[ 734] 00:36:43.280 bw ( KiB/s): min= 4096, max= 4096, per=33.83%, avg=4096.00, stdev= 0.00, samples=1 00:36:43.280 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:43.280 lat (usec) : 250=2.52%, 500=44.49%, 750=23.31%, 1000=24.07% 00:36:43.280 lat (msec) : 2=5.61% 00:36:43.280 cpu : usr=2.20%, sys=4.60%, ctx=1588, majf=0, minf=1 00:36:43.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:43.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.280 issued rwts: total=563,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:43.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:43.280 job1: (groupid=0, jobs=1): err= 0: pid=1768823: Fri Dec 6 17:52:34 2024 00:36:43.280 read: IOPS=18, BW=74.9KiB/s (76.7kB/s)(76.0KiB/1015msec) 00:36:43.280 slat (nsec): min=26602, max=27174, avg=26782.53, stdev=150.82 00:36:43.280 clat (usec): min=40759, max=41863, avg=41011.83, stdev=220.79 00:36:43.280 lat (usec): min=40785, max=41890, avg=41038.61, stdev=220.77 00:36:43.280 clat percentiles (usec): 00:36:43.280 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:36:43.280 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:43.280 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:36:43.280 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:43.280 | 99.99th=[41681] 00:36:43.280 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:36:43.280 slat (nsec): min=9918, max=53311, avg=29334.36, stdev=10596.63 00:36:43.280 clat (usec): min=125, max=595, avg=423.86, stdev=78.51 00:36:43.280 lat (usec): min=159, max=630, avg=453.20, stdev=82.49 00:36:43.280 clat percentiles (usec): 00:36:43.280 | 1.00th=[ 215], 5.00th=[ 293], 10.00th=[ 322], 20.00th=[ 351], 00:36:43.280 | 30.00th=[ 375], 40.00th=[ 416], 50.00th=[ 437], 60.00th=[ 457], 00:36:43.280 | 70.00th=[ 474], 80.00th=[ 490], 90.00th=[ 515], 95.00th=[ 537], 00:36:43.280 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 594], 99.95th=[ 594], 00:36:43.280 | 99.99th=[ 594] 00:36:43.280 bw ( KiB/s): min= 4096, max= 4096, per=33.83%, avg=4096.00, stdev= 0.00, samples=1 00:36:43.280 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:43.280 lat (usec) : 250=2.26%, 500=80.04%, 750=14.12% 00:36:43.280 lat (msec) : 50=3.58% 00:36:43.280 cpu : usr=0.39%, sys=1.78%, ctx=532, majf=0, minf=1 00:36:43.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:43.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.280 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:43.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:43.280 job2: (groupid=0, jobs=1): err= 0: pid=1768824: Fri Dec 6 17:52:34 2024 00:36:43.280 read: IOPS=644, BW=2577KiB/s (2639kB/s)(2580KiB/1001msec) 00:36:43.280 slat (nsec): min=6873, max=45430, avg=23180.21, stdev=8109.44 00:36:43.280 clat (usec): min=442, max=1000, avg=784.02, stdev=85.83 00:36:43.280 lat (usec): min=468, max=1027, avg=807.20, stdev=88.07 00:36:43.280 clat percentiles (usec): 00:36:43.280 | 1.00th=[ 519], 5.00th=[ 627], 10.00th=[ 668], 20.00th=[ 717], 00:36:43.280 | 30.00th=[ 750], 40.00th=[ 783], 
50.00th=[ 799], 60.00th=[ 816], 00:36:43.280 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 881], 95.00th=[ 906], 00:36:43.280 | 99.00th=[ 947], 99.50th=[ 955], 99.90th=[ 1004], 99.95th=[ 1004], 00:36:43.280 | 99.99th=[ 1004] 00:36:43.280 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:36:43.280 slat (nsec): min=9638, max=55448, avg=27806.01, stdev=10127.99 00:36:43.280 clat (usec): min=219, max=691, avg=429.73, stdev=82.95 00:36:43.280 lat (usec): min=231, max=725, avg=457.53, stdev=87.85 00:36:43.280 clat percentiles (usec): 00:36:43.280 | 1.00th=[ 239], 5.00th=[ 281], 10.00th=[ 318], 20.00th=[ 355], 00:36:43.280 | 30.00th=[ 388], 40.00th=[ 424], 50.00th=[ 441], 60.00th=[ 453], 00:36:43.280 | 70.00th=[ 469], 80.00th=[ 490], 90.00th=[ 529], 95.00th=[ 562], 00:36:43.280 | 99.00th=[ 619], 99.50th=[ 660], 99.90th=[ 676], 99.95th=[ 693], 00:36:43.280 | 99.99th=[ 693] 00:36:43.280 bw ( KiB/s): min= 4096, max= 4096, per=33.83%, avg=4096.00, stdev= 0.00, samples=1 00:36:43.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:43.281 lat (usec) : 250=1.02%, 500=50.27%, 750=21.27%, 1000=27.38% 00:36:43.281 lat (msec) : 2=0.06% 00:36:43.281 cpu : usr=1.90%, sys=4.80%, ctx=1669, majf=0, minf=1 00:36:43.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:43.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.281 issued rwts: total=645,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:43.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:43.281 job3: (groupid=0, jobs=1): err= 0: pid=1768825: Fri Dec 6 17:52:34 2024 00:36:43.281 read: IOPS=91, BW=364KiB/s (373kB/s)(368KiB/1010msec) 00:36:43.281 slat (nsec): min=7883, max=47010, avg=26206.87, stdev=4435.02 00:36:43.281 clat (usec): min=719, max=41914, avg=7360.33, stdev=14520.96 00:36:43.281 lat (usec): min=729, max=41941, avg=7386.54, stdev=14521.12 00:36:43.281 clat percentiles (usec): 00:36:43.281 | 1.00th=[ 717], 5.00th=[ 848], 10.00th=[ 930], 20.00th=[ 1012], 00:36:43.281 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:36:43.281 | 70.00th=[ 1139], 80.00th=[ 1221], 90.00th=[41157], 95.00th=[41157], 00:36:43.281 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:43.281 | 99.99th=[41681] 00:36:43.281 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:36:43.281 slat (nsec): min=10196, max=55744, avg=32590.65, stdev=8735.92 00:36:43.281 clat (usec): min=262, max=955, avg=603.11, stdev=131.56 00:36:43.281 lat (usec): min=273, max=995, avg=635.70, stdev=134.07 00:36:43.281 clat percentiles (usec): 00:36:43.281 | 1.00th=[ 273], 5.00th=[ 375], 10.00th=[ 441], 20.00th=[ 490], 00:36:43.281 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:36:43.281 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 824], 00:36:43.281 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:36:43.281 | 99.99th=[ 955] 00:36:43.281 bw ( KiB/s): min= 4096, max= 4096, per=33.83%, avg=4096.00, stdev= 0.00, samples=1 00:36:43.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:43.281 lat (usec) : 500=18.21%, 750=55.30%, 1000=14.07% 00:36:43.281 lat (msec) : 2=9.93%, 50=2.48% 00:36:43.281 cpu : usr=0.79%, sys=1.98%, ctx=605, majf=0, minf=1 00:36:43.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:43.281 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.281 issued rwts: total=92,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:43.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:43.281 00:36:43.281 Run status group 0 (all jobs): 00:36:43.281 READ: bw=5198KiB/s (5323kB/s), 74.9KiB/s-2577KiB/s (76.7kB/s-2639kB/s), io=5276KiB (5403kB), run=1001-1015msec 00:36:43.281 WRITE: bw=11.8MiB/s (12.4MB/s), 2018KiB/s-4092KiB/s (2066kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1015msec 00:36:43.281 00:36:43.281 Disk stats (read/write): 00:36:43.281 nvme0n1: ios=562/770, merge=0/0, ticks=512/308, in_queue=820, util=87.78% 00:36:43.281 nvme0n2: ios=37/512, merge=0/0, ticks=1540/221, in_queue=1761, util=97.24% 00:36:43.281 nvme0n3: ios=512/896, merge=0/0, ticks=391/371, in_queue=762, util=88.47% 00:36:43.281 nvme0n4: ios=109/512, merge=0/0, ticks=1416/296, in_queue=1712, util=97.11% 00:36:43.281 17:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:43.281 [global] 00:36:43.281 thread=1 00:36:43.281 invalidate=1 00:36:43.281 rw=randwrite 00:36:43.281 time_based=1 00:36:43.281 runtime=1 00:36:43.281 ioengine=libaio 00:36:43.281 direct=1 00:36:43.281 bs=4096 00:36:43.281 iodepth=1 00:36:43.281 norandommap=0 00:36:43.281 numjobs=1 00:36:43.281 00:36:43.281 verify_dump=1 00:36:43.281 verify_backlog=512 00:36:43.281 verify_state_save=0 00:36:43.281 do_verify=1 00:36:43.281 verify=crc32c-intel 00:36:43.281 [job0] 00:36:43.281 filename=/dev/nvme0n1 00:36:43.281 [job1] 00:36:43.281 filename=/dev/nvme0n2 00:36:43.281 [job2] 00:36:43.281 filename=/dev/nvme0n3 00:36:43.281 [job3] 00:36:43.281 filename=/dev/nvme0n4 00:36:43.281 Could not set queue depth (nvme0n1) 00:36:43.281 Could not set queue depth (nvme0n2) 00:36:43.281 Could not set queue depth (nvme0n3) 00:36:43.281 Could not set queue depth (nvme0n4) 00:36:43.541 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:43.541 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:43.541 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:43.541 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:43.541 fio-3.35 00:36:43.541 Starting 4 threads 00:36:44.956 00:36:44.956 job0: (groupid=0, jobs=1): err= 0: pid=1769028: Fri Dec 6 17:52:36 2024 00:36:44.956 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:44.956 slat (nsec): min=6721, max=62037, avg=24452.88, stdev=3921.07 00:36:44.956 clat (usec): min=479, max=1314, avg=947.94, stdev=147.03 00:36:44.956 lat (usec): min=504, max=1338, avg=972.40, stdev=147.58 00:36:44.956 clat percentiles (usec): 00:36:44.956 | 1.00th=[ 627], 5.00th=[ 685], 10.00th=[ 742], 20.00th=[ 816], 00:36:44.956 | 30.00th=[ 873], 40.00th=[ 930], 50.00th=[ 971], 60.00th=[ 996], 00:36:44.956 | 70.00th=[ 1029], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[ 1172], 00:36:44.956 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1319], 99.95th=[ 1319], 00:36:44.956 | 99.99th=[ 1319] 00:36:44.956 write: IOPS=817, BW=3269KiB/s (3347kB/s)(3272KiB/1001msec); 0 zone resets 00:36:44.956 slat (nsec): min=9277, max=93724, 
avg=26968.73, stdev=9356.22 00:36:44.956 clat (usec): min=210, max=858, avg=574.66, stdev=112.93 00:36:44.956 lat (usec): min=219, max=924, avg=601.63, stdev=116.78 00:36:44.956 clat percentiles (usec): 00:36:44.956 | 1.00th=[ 326], 5.00th=[ 371], 10.00th=[ 429], 20.00th=[ 474], 00:36:44.956 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 611], 00:36:44.956 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 717], 95.00th=[ 750], 00:36:44.956 | 99.00th=[ 816], 99.50th=[ 824], 99.90th=[ 857], 99.95th=[ 857], 00:36:44.956 | 99.99th=[ 857] 00:36:44.956 bw ( KiB/s): min= 4096, max= 4096, per=36.94%, avg=4096.00, stdev= 0.00, samples=1 00:36:44.956 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:44.956 lat (usec) : 250=0.23%, 500=15.79%, 750=46.84%, 1000=22.11% 00:36:44.956 lat (msec) : 2=15.04% 00:36:44.956 cpu : usr=2.10%, sys=3.40%, ctx=1331, majf=0, minf=1 00:36:44.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.956 issued rwts: total=512,818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:44.956 job1: (groupid=0, jobs=1): err= 0: pid=1769029: Fri Dec 6 17:52:36 2024 00:36:44.956 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:36:44.956 slat (nsec): min=25642, max=26898, avg=26120.00, stdev=417.09 00:36:44.956 clat (usec): min=599, max=42078, avg=32824.26, stdev=17238.92 00:36:44.956 lat (usec): min=626, max=42104, avg=32850.38, stdev=17238.88 00:36:44.956 clat percentiles (usec): 00:36:44.956 | 1.00th=[ 603], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 1037], 00:36:44.956 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:36:44.956 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:44.956 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:44.956 | 99.99th=[42206] 00:36:44.956 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:36:44.956 slat (nsec): min=9534, max=55321, avg=31922.87, stdev=8099.57 00:36:44.956 clat (usec): min=216, max=832, avg=502.27, stdev=110.87 00:36:44.956 lat (usec): min=228, max=865, avg=534.19, stdev=113.19 00:36:44.956 clat percentiles (usec): 00:36:44.956 | 1.00th=[ 258], 5.00th=[ 318], 10.00th=[ 363], 20.00th=[ 412], 00:36:44.956 | 30.00th=[ 445], 40.00th=[ 474], 50.00th=[ 498], 60.00th=[ 523], 00:36:44.956 | 70.00th=[ 553], 80.00th=[ 594], 90.00th=[ 652], 95.00th=[ 693], 00:36:44.956 | 99.00th=[ 766], 99.50th=[ 799], 99.90th=[ 832], 99.95th=[ 832], 00:36:44.956 | 99.99th=[ 832] 00:36:44.956 bw ( KiB/s): min= 4096, max= 4096, per=36.94%, avg=4096.00, stdev= 0.00, samples=1 00:36:44.956 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:44.956 lat (usec) : 250=0.75%, 500=47.66%, 750=45.98%, 1000=2.06% 00:36:44.956 lat (msec) : 2=0.19%, 50=3.36% 00:36:44.956 cpu : usr=0.58%, sys=1.84%, ctx=537, majf=0, minf=1 00:36:44.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.956 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:44.956 job2: (groupid=0, jobs=1): 
err= 0: pid=1769030: Fri Dec 6 17:52:36 2024 00:36:44.956 read: IOPS=18, BW=73.5KiB/s (75.3kB/s)(76.0KiB/1034msec) 00:36:44.956 slat (nsec): min=26385, max=27087, avg=26623.89, stdev=163.41 00:36:44.956 clat (usec): min=896, max=42585, avg=39572.20, stdev=9380.36 00:36:44.956 lat (usec): min=923, max=42612, avg=39598.82, stdev=9380.42 00:36:44.956 clat percentiles (usec): 00:36:44.956 | 1.00th=[ 898], 5.00th=[ 898], 10.00th=[40633], 20.00th=[41157], 00:36:44.956 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:36:44.956 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:36:44.956 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:44.956 | 99.99th=[42730] 00:36:44.956 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:36:44.956 slat (nsec): min=8851, max=65553, avg=31351.88, stdev=7606.33 00:36:44.956 clat (usec): min=152, max=857, avg=509.90, stdev=113.61 00:36:44.956 lat (usec): min=161, max=890, avg=541.25, stdev=116.03 00:36:44.956 clat percentiles (usec): 00:36:44.956 | 1.00th=[ 251], 5.00th=[ 302], 10.00th=[ 363], 20.00th=[ 408], 00:36:44.956 | 30.00th=[ 461], 40.00th=[ 490], 50.00th=[ 510], 60.00th=[ 545], 00:36:44.956 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 652], 95.00th=[ 676], 00:36:44.956 | 99.00th=[ 709], 99.50th=[ 758], 99.90th=[ 857], 99.95th=[ 857], 00:36:44.956 | 99.99th=[ 857] 00:36:44.956 bw ( KiB/s): min= 4096, max= 4096, per=36.94%, avg=4096.00, stdev= 0.00, samples=1 00:36:44.956 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:44.956 lat (usec) : 250=0.94%, 500=43.31%, 750=51.60%, 1000=0.75% 00:36:44.957 lat (msec) : 50=3.39% 00:36:44.957 cpu : usr=0.87%, sys=2.23%, ctx=531, majf=0, minf=1 00:36:44.957 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.957 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:44.957 job3: (groupid=0, jobs=1): err= 0: pid=1769031: Fri Dec 6 17:52:36 2024 00:36:44.957 read: IOPS=519, BW=2078KiB/s (2128kB/s)(2084KiB/1003msec) 00:36:44.957 slat (nsec): min=6886, max=46293, avg=23619.67, stdev=6718.79 00:36:44.957 clat (usec): min=216, max=41941, avg=1231.83, stdev=5014.28 00:36:44.957 lat (usec): min=230, max=41967, avg=1255.45, stdev=5014.57 00:36:44.957 clat percentiles (usec): 00:36:44.957 | 1.00th=[ 258], 5.00th=[ 326], 10.00th=[ 383], 20.00th=[ 498], 00:36:44.957 | 30.00th=[ 570], 40.00th=[ 627], 50.00th=[ 652], 60.00th=[ 668], 00:36:44.957 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 791], 00:36:44.957 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:44.957 | 99.99th=[41681] 00:36:44.957 write: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec); 0 zone resets 00:36:44.957 slat (nsec): min=9196, max=51240, avg=19824.78, stdev=11237.08 00:36:44.957 clat (usec): min=116, max=967, avg=311.76, stdev=190.32 00:36:44.957 lat (usec): min=126, max=999, avg=331.59, stdev=198.40 00:36:44.957 clat percentiles (usec): 00:36:44.957 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 129], 00:36:44.957 | 30.00th=[ 135], 40.00th=[ 143], 50.00th=[ 277], 60.00th=[ 347], 00:36:44.957 | 70.00th=[ 437], 80.00th=[ 498], 90.00th=[ 594], 95.00th=[ 644], 00:36:44.957 | 99.00th=[ 766], 99.50th=[ 807], 
99.90th=[ 865], 99.95th=[ 971], 00:36:44.957 | 99.99th=[ 971] 00:36:44.957 bw ( KiB/s): min= 2440, max= 5752, per=36.94%, avg=4096.00, stdev=2341.94, samples=2 00:36:44.957 iops : min= 610, max= 1438, avg=1024.00, stdev=585.48, samples=2 00:36:44.957 lat (usec) : 250=31.65%, 500=28.35%, 750=36.12%, 1000=3.37% 00:36:44.957 lat (msec) : 50=0.52% 00:36:44.957 cpu : usr=1.10%, sys=4.09%, ctx=1545, majf=0, minf=1 00:36:44.957 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.957 issued rwts: total=521,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:44.957 00:36:44.957 Run status group 0 (all jobs): 00:36:44.957 READ: bw=4159KiB/s (4258kB/s), 73.5KiB/s-2078KiB/s (75.3kB/s-2128kB/s), io=4300KiB (4403kB), run=1001-1034msec 00:36:44.957 WRITE: bw=10.8MiB/s (11.4MB/s), 1981KiB/s-4084KiB/s (2028kB/s-4182kB/s), io=11.2MiB (11.7MB), run=1001-1034msec 00:36:44.957 00:36:44.957 Disk stats (read/write): 00:36:44.957 nvme0n1: ios=562/537, merge=0/0, ticks=616/304, in_queue=920, util=95.69% 00:36:44.957 nvme0n2: ios=66/512, merge=0/0, ticks=1535/235, in_queue=1770, util=97.04% 00:36:44.957 nvme0n3: ios=56/512, merge=0/0, ticks=647/200, in_queue=847, util=91.89% 00:36:44.957 nvme0n4: ios=516/1024, merge=0/0, ticks=431/307, in_queue=738, util=89.43% 00:36:44.957 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:44.957 [global] 00:36:44.957 thread=1 00:36:44.957 invalidate=1 00:36:44.957 rw=write 00:36:44.957 time_based=1 00:36:44.957 runtime=1 00:36:44.957 ioengine=libaio 00:36:44.957 direct=1 00:36:44.957 bs=4096 00:36:44.957 iodepth=128 00:36:44.957 norandommap=0 00:36:44.957 numjobs=1 00:36:44.957 00:36:44.957 verify_dump=1 00:36:44.957 verify_backlog=512 00:36:44.957 verify_state_save=0 00:36:44.957 do_verify=1 00:36:44.957 verify=crc32c-intel 00:36:44.957 [job0] 00:36:44.957 filename=/dev/nvme0n1 00:36:44.957 [job1] 00:36:44.957 filename=/dev/nvme0n2 00:36:44.957 [job2] 00:36:44.957 filename=/dev/nvme0n3 00:36:44.957 [job3] 00:36:44.957 filename=/dev/nvme0n4 00:36:44.957 Could not set queue depth (nvme0n1) 00:36:44.957 Could not set queue depth (nvme0n2) 00:36:44.957 Could not set queue depth (nvme0n3) 00:36:44.957 Could not set queue depth (nvme0n4) 00:36:45.218 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:45.218 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:45.218 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:45.218 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:45.218 fio-3.35 00:36:45.218 Starting 4 threads 00:36:46.617 00:36:46.617 job0: (groupid=0, jobs=1): err= 0: pid=1769235: Fri Dec 6 17:52:38 2024 00:36:46.617 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:36:46.617 slat (nsec): min=898, max=7763.2k, avg=92675.98, stdev=613386.24 00:36:46.617 clat (usec): min=5262, max=26161, avg=11827.06, stdev=2863.66 00:36:46.617 lat (usec): min=5269, max=28589, avg=11919.74, stdev=2912.46 00:36:46.617 clat 
percentiles (usec): 00:36:46.617 | 1.00th=[ 5866], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9503], 00:36:46.617 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11207], 60.00th=[12387], 00:36:46.617 | 70.00th=[13173], 80.00th=[13960], 90.00th=[15270], 95.00th=[17433], 00:36:46.617 | 99.00th=[20317], 99.50th=[21890], 99.90th=[26084], 99.95th=[26084], 00:36:46.617 | 99.99th=[26084] 00:36:46.617 write: IOPS=4892, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1005msec); 0 zone resets 00:36:46.617 slat (nsec): min=1715, max=12608k, avg=111438.73, stdev=550615.97 00:36:46.617 clat (usec): min=2715, max=48776, avg=14807.07, stdev=7738.36 00:36:46.617 lat (usec): min=4247, max=51618, avg=14918.51, stdev=7796.98 00:36:46.617 clat percentiles (usec): 00:36:46.617 | 1.00th=[ 6194], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 8455], 00:36:46.617 | 30.00th=[ 8979], 40.00th=[10290], 50.00th=[11600], 60.00th=[13435], 00:36:46.617 | 70.00th=[19268], 80.00th=[21890], 90.00th=[26608], 95.00th=[29230], 00:36:46.617 | 99.00th=[36963], 99.50th=[43254], 99.90th=[45876], 99.95th=[46924], 00:36:46.617 | 99.99th=[49021] 00:36:46.617 bw ( KiB/s): min=19000, max=19312, per=23.37%, avg=19156.00, stdev=220.62, samples=2 00:36:46.617 iops : min= 4750, max= 4828, avg=4789.00, stdev=55.15, samples=2 00:36:46.617 lat (msec) : 4=0.01%, 10=32.12%, 20=52.92%, 50=14.95% 00:36:46.617 cpu : usr=3.29%, sys=5.28%, ctx=444, majf=0, minf=1 00:36:46.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:36:46.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:46.617 issued rwts: total=4608,4917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:46.617 job1: (groupid=0, jobs=1): err= 0: pid=1769236: Fri Dec 6 17:52:38 2024 00:36:46.617 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:36:46.617 slat (nsec): min=947, max=18330k, avg=81736.96, stdev=591758.28 00:36:46.617 clat (usec): min=1653, max=36742, avg=10646.37, stdev=6709.46 00:36:46.617 lat (usec): min=1656, max=36768, avg=10728.11, stdev=6752.49 00:36:46.617 clat percentiles (usec): 00:36:46.617 | 1.00th=[ 3326], 5.00th=[ 4555], 10.00th=[ 4948], 20.00th=[ 5669], 00:36:46.617 | 30.00th=[ 6259], 40.00th=[ 6783], 50.00th=[ 7439], 60.00th=[ 8717], 00:36:46.617 | 70.00th=[13042], 80.00th=[16450], 90.00th=[21103], 95.00th=[24511], 00:36:46.617 | 99.00th=[31065], 99.50th=[31851], 99.90th=[34866], 99.95th=[35390], 00:36:46.617 | 99.99th=[36963] 00:36:46.617 write: IOPS=6643, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:36:46.617 slat (nsec): min=1622, max=21693k, avg=63953.41, stdev=483554.54 00:36:46.617 clat (usec): min=1347, max=25848, avg=8061.76, stdev=4311.13 00:36:46.617 lat (usec): min=1356, max=25881, avg=8125.71, stdev=4347.63 00:36:46.617 clat percentiles (usec): 00:36:46.617 | 1.00th=[ 2671], 5.00th=[ 3425], 10.00th=[ 4555], 20.00th=[ 5407], 00:36:46.617 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6194], 60.00th=[ 6849], 00:36:46.617 | 70.00th=[ 8094], 80.00th=[11469], 90.00th=[14222], 95.00th=[17695], 00:36:46.617 | 99.00th=[21627], 99.50th=[23725], 99.90th=[25297], 99.95th=[25297], 00:36:46.617 | 99.99th=[25822] 00:36:46.617 bw ( KiB/s): min=24576, max=28672, per=32.48%, avg=26624.00, stdev=2896.31, samples=2 00:36:46.617 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:36:46.617 lat (msec) : 2=0.25%, 4=5.05%, 10=65.02%, 20=23.29%, 
50=6.39% 00:36:46.617 cpu : usr=3.59%, sys=6.29%, ctx=548, majf=0, minf=1 00:36:46.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:36:46.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:46.617 issued rwts: total=6656,6663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:46.617 job2: (groupid=0, jobs=1): err= 0: pid=1769237: Fri Dec 6 17:52:38 2024 00:36:46.617 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:36:46.617 slat (nsec): min=966, max=8238.0k, avg=91694.36, stdev=553561.36 00:36:46.617 clat (usec): min=5423, max=37716, avg=12030.00, stdev=4724.26 00:36:46.617 lat (usec): min=5427, max=39469, avg=12121.69, stdev=4753.14 00:36:46.617 clat percentiles (usec): 00:36:46.617 | 1.00th=[ 6194], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8225], 00:36:46.617 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[12256], 00:36:46.617 | 70.00th=[13566], 80.00th=[15926], 90.00th=[18744], 95.00th=[20841], 00:36:46.617 | 99.00th=[26870], 99.50th=[28967], 99.90th=[37487], 99.95th=[37487], 00:36:46.617 | 99.99th=[37487] 00:36:46.617 write: IOPS=5246, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1004msec); 0 zone resets 00:36:46.617 slat (nsec): min=1637, max=10530k, avg=96420.51, stdev=491961.98 00:36:46.617 clat (usec): min=2885, max=48768, avg=12387.30, stdev=8038.16 00:36:46.617 lat (usec): min=4072, max=48774, avg=12483.72, stdev=8096.03 00:36:46.617 clat percentiles (usec): 00:36:46.617 | 1.00th=[ 4490], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7635], 00:36:46.617 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[10421], 00:36:46.617 | 70.00th=[11994], 80.00th=[15664], 90.00th=[22676], 95.00th=[32637], 00:36:46.617 | 99.00th=[41681], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:36:46.617 | 99.99th=[49021] 00:36:46.617 bw ( KiB/s): min=16544, max=24576, per=25.08%, avg=20560.00, stdev=5679.48, samples=2 00:36:46.617 iops : min= 4136, max= 6144, avg=5140.00, stdev=1419.87, samples=2 00:36:46.617 lat (msec) : 4=0.02%, 10=51.87%, 20=37.84%, 50=10.27% 00:36:46.617 cpu : usr=3.29%, sys=4.19%, ctx=617, majf=0, minf=1 00:36:46.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:46.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:46.617 issued rwts: total=5120,5267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:46.617 job3: (groupid=0, jobs=1): err= 0: pid=1769238: Fri Dec 6 17:52:38 2024 00:36:46.617 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:36:46.617 slat (nsec): min=974, max=9403.3k, avg=114235.78, stdev=727295.58 00:36:46.617 clat (usec): min=5642, max=30779, avg=14660.57, stdev=6227.83 00:36:46.617 lat (usec): min=5649, max=30784, avg=14774.81, stdev=6266.74 00:36:46.617 clat percentiles (usec): 00:36:46.617 | 1.00th=[ 5932], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9110], 00:36:46.617 | 30.00th=[ 9372], 40.00th=[10683], 50.00th=[12911], 60.00th=[15401], 00:36:46.617 | 70.00th=[18482], 80.00th=[20317], 90.00th=[23987], 95.00th=[27132], 00:36:46.617 | 99.00th=[29754], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:36:46.617 | 99.99th=[30802] 00:36:46.617 write: IOPS=3731, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1005msec); 
0 zone resets 00:36:46.617 slat (nsec): min=1676, max=42000k, avg=148845.40, stdev=1527545.27 00:36:46.617 clat (msec): min=4, max=185, avg=16.33, stdev=16.91 00:36:46.617 lat (msec): min=4, max=185, avg=16.48, stdev=17.14 00:36:46.617 clat percentiles (msec): 00:36:46.617 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 8], 00:36:46.617 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14], 00:36:46.617 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 24], 95.00th=[ 43], 00:36:46.617 | 99.00th=[ 89], 99.50th=[ 127], 99.90th=[ 186], 99.95th=[ 186], 00:36:46.617 | 99.99th=[ 186] 00:36:46.617 bw ( KiB/s): min= 8192, max=20832, per=17.70%, avg=14512.00, stdev=8937.83, samples=2 00:36:46.617 iops : min= 2048, max= 5208, avg=3628.00, stdev=2234.46, samples=2 00:36:46.617 lat (msec) : 10=37.17%, 20=42.84%, 50=18.27%, 100=1.28%, 250=0.44% 00:36:46.618 cpu : usr=3.09%, sys=3.98%, ctx=257, majf=0, minf=1 00:36:46.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:36:46.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:46.618 issued rwts: total=3584,3750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.618 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:46.618 00:36:46.618 Run status group 0 (all jobs): 00:36:46.618 READ: bw=77.6MiB/s (81.4MB/s), 13.9MiB/s-25.9MiB/s (14.6MB/s-27.2MB/s), io=78.0MiB (81.8MB), run=1003-1005msec 00:36:46.618 WRITE: bw=80.1MiB/s (83.9MB/s), 14.6MiB/s-25.9MiB/s (15.3MB/s-27.2MB/s), io=80.5MiB (84.4MB), run=1003-1005msec 00:36:46.618 00:36:46.618 Disk stats (read/write): 00:36:46.618 nvme0n1: ios=4075/4096, merge=0/0, ticks=23009/27591, in_queue=50600, util=87.58% 00:36:46.618 nvme0n2: ios=5536/5632, merge=0/0, ticks=33249/24333, in_queue=57582, util=98.37% 00:36:46.618 nvme0n3: ios=4641/4751, merge=0/0, ticks=17179/17821, in_queue=35000, util=96.94% 00:36:46.618 nvme0n4: ios=2595/2700, merge=0/0, ticks=14627/13373, in_queue=28000, util=97.76% 00:36:46.618 17:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:46.618 [global] 00:36:46.618 thread=1 00:36:46.618 invalidate=1 00:36:46.618 rw=randwrite 00:36:46.618 time_based=1 00:36:46.618 runtime=1 00:36:46.618 ioengine=libaio 00:36:46.618 direct=1 00:36:46.618 bs=4096 00:36:46.618 iodepth=128 00:36:46.618 norandommap=0 00:36:46.618 numjobs=1 00:36:46.618 00:36:46.618 verify_dump=1 00:36:46.618 verify_backlog=512 00:36:46.618 verify_state_save=0 00:36:46.618 do_verify=1 00:36:46.618 verify=crc32c-intel 00:36:46.618 [job0] 00:36:46.618 filename=/dev/nvme0n1 00:36:46.618 [job1] 00:36:46.618 filename=/dev/nvme0n2 00:36:46.618 [job2] 00:36:46.618 filename=/dev/nvme0n3 00:36:46.618 [job3] 00:36:46.618 filename=/dev/nvme0n4 00:36:46.618 Could not set queue depth (nvme0n1) 00:36:46.618 Could not set queue depth (nvme0n2) 00:36:46.618 Could not set queue depth (nvme0n3) 00:36:46.618 Could not set queue depth (nvme0n4) 00:36:46.877 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:46.877 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:46.877 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:46.877 job3: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:46.877 fio-3.35 00:36:46.877 Starting 4 threads 00:36:48.286 00:36:48.286 job0: (groupid=0, jobs=1): err= 0: pid=1769448: Fri Dec 6 17:52:39 2024 00:36:48.286 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:36:48.286 slat (nsec): min=931, max=8480.7k, avg=83160.78, stdev=496170.09 00:36:48.286 clat (usec): min=2546, max=40472, avg=10073.21, stdev=4768.03 00:36:48.286 lat (usec): min=2552, max=40479, avg=10156.37, stdev=4808.07 00:36:48.286 clat percentiles (usec): 00:36:48.286 | 1.00th=[ 3359], 5.00th=[ 6325], 10.00th=[ 7242], 20.00th=[ 8094], 00:36:48.286 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:36:48.286 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[13304], 95.00th=[19006], 00:36:48.286 | 99.00th=[33162], 99.50th=[36963], 99.90th=[37487], 99.95th=[40633], 00:36:48.286 | 99.99th=[40633] 00:36:48.286 write: IOPS=5543, BW=21.7MiB/s (22.7MB/s)(21.7MiB/1002msec); 0 zone resets 00:36:48.286 slat (nsec): min=1604, max=45722k, avg=94652.34, stdev=745784.05 00:36:48.286 clat (usec): min=436, max=57214, avg=13569.14, stdev=10697.43 00:36:48.286 lat (usec): min=543, max=57217, avg=13663.79, stdev=10755.57 00:36:48.286 clat percentiles (usec): 00:36:48.286 | 1.00th=[ 1565], 5.00th=[ 4113], 10.00th=[ 5276], 20.00th=[ 6718], 00:36:48.286 | 30.00th=[ 7373], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 9372], 00:36:48.286 | 70.00th=[13960], 80.00th=[23987], 90.00th=[29492], 95.00th=[32900], 00:36:48.286 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55313], 99.95th=[57410], 00:36:48.286 | 99.99th=[57410] 00:36:48.286 bw ( KiB/s): min=18848, max=24576, per=25.06%, avg=21712.00, stdev=4050.31, samples=2 00:36:48.286 iops : min= 4712, max= 6144, avg=5428.00, stdev=1012.58, samples=2 00:36:48.286 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.07% 00:36:48.286 lat (msec) : 2=0.78%, 4=2.30%, 10=65.59%, 20=16.49%, 50=13.55% 00:36:48.286 lat (msec) : 100=1.19% 00:36:48.286 cpu : usr=3.60%, sys=4.80%, ctx=581, majf=0, minf=1 00:36:48.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:36:48.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:48.286 issued rwts: total=5120,5555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:48.286 job1: (groupid=0, jobs=1): err= 0: pid=1769449: Fri Dec 6 17:52:39 2024 00:36:48.286 read: IOPS=4682, BW=18.3MiB/s (19.2MB/s)(19.1MiB/1043msec) 00:36:48.286 slat (nsec): min=886, max=13653k, avg=105979.29, stdev=734458.71 00:36:48.286 clat (usec): min=6975, max=65879, avg=14689.71, stdev=7811.60 00:36:48.286 lat (usec): min=6977, max=65883, avg=14795.69, stdev=7860.90 00:36:48.286 clat percentiles (usec): 00:36:48.286 | 1.00th=[ 7242], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[ 9896], 00:36:48.286 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11994], 60.00th=[13435], 00:36:48.286 | 70.00th=[14877], 80.00th=[19268], 90.00th=[23200], 95.00th=[27395], 00:36:48.286 | 99.00th=[50070], 99.50th=[50594], 99.90th=[65799], 99.95th=[65799], 00:36:48.286 | 99.99th=[65799] 00:36:48.286 write: IOPS=4908, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1043msec); 0 zone resets 00:36:48.286 slat (nsec): min=1485, max=8734.1k, avg=88669.79, stdev=484608.55 00:36:48.286 clat (usec): min=4293, max=37508, avg=11807.94, stdev=5254.44 00:36:48.286 lat (usec): min=4301, max=37516, avg=11896.61, 
stdev=5300.38 00:36:48.286 clat percentiles (usec): 00:36:48.286 | 1.00th=[ 6587], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8356], 00:36:48.286 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[10421], 00:36:48.286 | 70.00th=[13435], 80.00th=[14746], 90.00th=[19006], 95.00th=[23987], 00:36:48.286 | 99.00th=[32113], 99.50th=[34341], 99.90th=[35390], 99.95th=[37487], 00:36:48.286 | 99.99th=[37487] 00:36:48.286 bw ( KiB/s): min=17704, max=23256, per=23.64%, avg=20480.00, stdev=3925.86, samples=2 00:36:48.286 iops : min= 4426, max= 5814, avg=5120.00, stdev=981.46, samples=2 00:36:48.286 lat (msec) : 10=39.91%, 20=48.79%, 50=10.81%, 100=0.49% 00:36:48.286 cpu : usr=3.36%, sys=5.76%, ctx=428, majf=0, minf=1 00:36:48.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:48.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:48.286 issued rwts: total=4884,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:48.286 job2: (groupid=0, jobs=1): err= 0: pid=1769450: Fri Dec 6 17:52:39 2024 00:36:48.286 read: IOPS=8276, BW=32.3MiB/s (33.9MB/s)(32.5MiB/1005msec) 00:36:48.286 slat (nsec): min=966, max=9471.4k, avg=62197.93, stdev=472787.17 00:36:48.286 clat (usec): min=1180, max=21966, avg=8172.99, stdev=2548.64 00:36:48.287 lat (usec): min=3044, max=21980, avg=8235.19, stdev=2576.02 00:36:48.287 clat percentiles (usec): 00:36:48.287 | 1.00th=[ 4080], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6128], 00:36:48.287 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7701], 60.00th=[ 8291], 00:36:48.287 | 70.00th=[ 8979], 80.00th=[10028], 90.00th=[11863], 95.00th=[12911], 00:36:48.287 | 99.00th=[15664], 99.50th=[17433], 99.90th=[17695], 99.95th=[21365], 00:36:48.287 | 99.99th=[21890] 00:36:48.287 write: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec); 0 zone resets 00:36:48.287 slat (nsec): min=1563, max=7093.5k, avg=50789.95, stdev=358160.88 00:36:48.287 clat (usec): min=1144, max=17104, avg=6833.73, stdev=2049.20 00:36:48.287 lat (usec): min=1154, max=17136, avg=6884.52, stdev=2062.09 00:36:48.287 clat percentiles (usec): 00:36:48.287 | 1.00th=[ 2704], 5.00th=[ 3752], 10.00th=[ 4080], 20.00th=[ 5342], 00:36:48.287 | 30.00th=[ 6063], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6915], 00:36:48.287 | 70.00th=[ 7701], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[10290], 00:36:48.287 | 99.00th=[14484], 99.50th=[15139], 99.90th=[15139], 99.95th=[15401], 00:36:48.287 | 99.99th=[17171] 00:36:48.287 bw ( KiB/s): min=32768, max=36848, per=40.18%, avg=34808.00, stdev=2885.00, samples=2 00:36:48.287 iops : min= 8192, max= 9212, avg=8702.00, stdev=721.25, samples=2 00:36:48.287 lat (msec) : 2=0.13%, 4=4.64%, 10=82.45%, 20=12.74%, 50=0.04% 00:36:48.287 cpu : usr=6.08%, sys=8.37%, ctx=640, majf=0, minf=2 00:36:48.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:36:48.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:48.287 issued rwts: total=8318,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:48.287 job3: (groupid=0, jobs=1): err= 0: pid=1769451: Fri Dec 6 17:52:39 2024 00:36:48.287 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:36:48.287 slat (nsec): min=980, max=18726k, 
avg=176359.23, stdev=1101757.20 00:36:48.287 clat (usec): min=2837, max=63418, avg=23797.53, stdev=10779.14 00:36:48.287 lat (usec): min=2844, max=63839, avg=23973.89, stdev=10811.36 00:36:48.287 clat percentiles (usec): 00:36:48.287 | 1.00th=[ 4178], 5.00th=[ 8848], 10.00th=[12780], 20.00th=[15139], 00:36:48.287 | 30.00th=[17433], 40.00th=[18744], 50.00th=[21627], 60.00th=[23725], 00:36:48.287 | 70.00th=[28443], 80.00th=[32113], 90.00th=[39584], 95.00th=[43779], 00:36:48.287 | 99.00th=[56361], 99.50th=[60031], 99.90th=[63177], 99.95th=[63177], 00:36:48.287 | 99.99th=[63177] 00:36:48.287 write: IOPS=3198, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1003msec); 0 zone resets 00:36:48.287 slat (nsec): min=1585, max=7991.7k, avg=132854.68, stdev=601175.18 00:36:48.287 clat (usec): min=523, max=54099, avg=16695.24, stdev=10924.40 00:36:48.287 lat (usec): min=557, max=54105, avg=16828.09, stdev=10987.02 00:36:48.287 clat percentiles (usec): 00:36:48.287 | 1.00th=[ 848], 5.00th=[ 3163], 10.00th=[ 6849], 20.00th=[ 9372], 00:36:48.287 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13829], 60.00th=[14615], 00:36:48.287 | 70.00th=[17171], 80.00th=[23462], 90.00th=[31589], 95.00th=[42730], 00:36:48.287 | 99.00th=[51643], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:36:48.287 | 99.99th=[54264] 00:36:48.287 bw ( KiB/s): min= 8264, max=16384, per=14.23%, avg=12324.00, stdev=5741.71, samples=2 00:36:48.287 iops : min= 2066, max= 4096, avg=3081.00, stdev=1435.43, samples=2 00:36:48.287 lat (usec) : 750=0.41%, 1000=0.37% 00:36:48.287 lat (msec) : 2=0.73%, 4=1.32%, 10=11.88%, 20=45.06%, 50=37.53% 00:36:48.287 lat (msec) : 100=2.69% 00:36:48.287 cpu : usr=2.30%, sys=3.99%, ctx=378, majf=0, minf=1 00:36:48.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:36:48.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:48.287 issued rwts: total=3072,3208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:48.287 00:36:48.287 Run status group 0 (all jobs): 00:36:48.287 READ: bw=80.1MiB/s (84.0MB/s), 12.0MiB/s-32.3MiB/s (12.5MB/s-33.9MB/s), io=83.6MiB (87.6MB), run=1002-1043msec 00:36:48.287 WRITE: bw=84.6MiB/s (88.7MB/s), 12.5MiB/s-33.8MiB/s (13.1MB/s-35.5MB/s), io=88.2MiB (92.5MB), run=1002-1043msec 00:36:48.287 00:36:48.287 Disk stats (read/write): 00:36:48.287 nvme0n1: ios=4659/4695, merge=0/0, ticks=28663/38136, in_queue=66799, util=96.69% 00:36:48.287 nvme0n2: ios=4140/4104, merge=0/0, ticks=27254/23177, in_queue=50431, util=95.72% 00:36:48.287 nvme0n3: ios=6673/7168, merge=0/0, ticks=53294/47994, in_queue=101288, util=88.41% 00:36:48.287 nvme0n4: ios=2648/3072, merge=0/0, ticks=18932/14707, in_queue=33639, util=97.12% 00:36:48.287 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:48.287 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1769463 00:36:48.287 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:48.287 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:48.287 [global] 00:36:48.287 thread=1 00:36:48.287 invalidate=1 00:36:48.287 rw=read 00:36:48.287 time_based=1 00:36:48.287 runtime=10 00:36:48.287 
ioengine=libaio 00:36:48.287 direct=1 00:36:48.287 bs=4096 00:36:48.287 iodepth=1 00:36:48.287 norandommap=1 00:36:48.287 numjobs=1 00:36:48.287 00:36:48.287 [job0] 00:36:48.287 filename=/dev/nvme0n1 00:36:48.287 [job1] 00:36:48.287 filename=/dev/nvme0n2 00:36:48.287 [job2] 00:36:48.287 filename=/dev/nvme0n3 00:36:48.287 [job3] 00:36:48.287 filename=/dev/nvme0n4 00:36:48.287 Could not set queue depth (nvme0n1) 00:36:48.287 Could not set queue depth (nvme0n2) 00:36:48.287 Could not set queue depth (nvme0n3) 00:36:48.287 Could not set queue depth (nvme0n4) 00:36:48.547 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:48.547 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:48.547 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:48.547 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:48.547 fio-3.35 00:36:48.547 Starting 4 threads 00:36:51.091 17:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:51.351 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:51.351 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=630784, buflen=4096 00:36:51.351 fio: pid=1769654, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:51.351 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9191424, buflen=4096 00:36:51.351 fio: pid=1769653, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:51.351 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:51.351 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:51.611 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14594048, buflen=4096 00:36:51.611 fio: pid=1769651, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:51.611 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:51.611 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:51.875 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:51.875 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:51.875 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4747264, buflen=4096 00:36:51.875 fio: pid=1769652, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:51.875 00:36:51.875 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not 
supported): pid=1769651: Fri Dec 6 17:52:43 2024 00:36:51.875 read: IOPS=1208, BW=4831KiB/s (4947kB/s)(13.9MiB/2950msec) 00:36:51.875 slat (usec): min=2, max=21107, avg=33.27, stdev=431.67 00:36:51.875 clat (usec): min=311, max=41598, avg=782.47, stdev=690.14 00:36:51.875 lat (usec): min=337, max=41603, avg=815.74, stdev=813.58 00:36:51.875 clat percentiles (usec): 00:36:51.875 | 1.00th=[ 529], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 701], 00:36:51.875 | 30.00th=[ 725], 40.00th=[ 758], 50.00th=[ 783], 60.00th=[ 799], 00:36:51.875 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 889], 00:36:51.875 | 99.00th=[ 1074], 99.50th=[ 1139], 99.90th=[ 1237], 99.95th=[ 1303], 00:36:51.875 | 99.99th=[41681] 00:36:51.875 bw ( KiB/s): min= 4856, max= 5176, per=54.77%, avg=4961.60, stdev=128.17, samples=5 00:36:51.875 iops : min= 1214, max= 1294, avg=1240.40, stdev=32.04, samples=5 00:36:51.875 lat (usec) : 500=0.67%, 750=35.41%, 1000=62.57% 00:36:51.875 lat (msec) : 2=1.29%, 50=0.03% 00:36:51.875 cpu : usr=1.05%, sys=2.88%, ctx=3570, majf=0, minf=1 00:36:51.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.875 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.875 issued rwts: total=3564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:51.875 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1769652: Fri Dec 6 17:52:43 2024 00:36:51.875 read: IOPS=368, BW=1475KiB/s (1510kB/s)(4636KiB/3144msec) 00:36:51.875 slat (usec): min=2, max=2736, avg=13.27, stdev=80.74 00:36:51.875 clat (usec): min=339, max=41921, avg=2678.21, stdev=8728.54 00:36:51.875 lat (usec): min=366, max=43928, avg=2691.47, stdev=8743.73 00:36:51.875 clat percentiles (usec): 00:36:51.875 | 1.00th=[ 453], 5.00th=[ 537], 10.00th=[ 570], 20.00th=[ 627], 00:36:51.875 | 30.00th=[ 660], 40.00th=[ 676], 50.00th=[ 701], 60.00th=[ 717], 00:36:51.875 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 840], 95.00th=[ 1385], 00:36:51.875 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:51.875 | 99.99th=[41681] 00:36:51.875 bw ( KiB/s): min= 91, max= 5176, per=17.00%, avg=1540.50, stdev=2286.13, samples=6 00:36:51.875 iops : min= 22, max= 1294, avg=385.00, stdev=571.63, samples=6 00:36:51.875 lat (usec) : 500=3.36%, 750=68.10%, 1000=23.02% 00:36:51.875 lat (msec) : 2=0.52%, 50=4.91% 00:36:51.875 cpu : usr=0.16%, sys=0.45%, ctx=1163, majf=0, minf=2 00:36:51.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.875 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.875 issued rwts: total=1160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:51.875 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1769653: Fri Dec 6 17:52:43 2024 00:36:51.875 read: IOPS=809, BW=3237KiB/s (3315kB/s)(8976KiB/2773msec) 00:36:51.875 slat (usec): min=7, max=13331, avg=36.14, stdev=361.40 00:36:51.875 clat (usec): min=643, max=1508, avg=1181.60, stdev=110.74 00:36:51.875 lat (usec): min=668, max=14518, avg=1217.74, stdev=378.25 00:36:51.875 clat percentiles (usec): 00:36:51.875 | 1.00th=[ 824], 5.00th=[ 963], 
10.00th=[ 1037], 20.00th=[ 1106], 00:36:51.875 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1221], 00:36:51.875 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1303], 95.00th=[ 1336], 00:36:51.875 | 99.00th=[ 1385], 99.50th=[ 1401], 99.90th=[ 1467], 99.95th=[ 1483], 00:36:51.875 | 99.99th=[ 1516] 00:36:51.875 bw ( KiB/s): min= 3216, max= 3352, per=36.28%, avg=3286.40, stdev=63.09, samples=5 00:36:51.875 iops : min= 804, max= 838, avg=821.60, stdev=15.77, samples=5 00:36:51.875 lat (usec) : 750=0.49%, 1000=6.24% 00:36:51.875 lat (msec) : 2=93.23% 00:36:51.875 cpu : usr=0.69%, sys=2.67%, ctx=2247, majf=0, minf=2 00:36:51.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.875 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.875 issued rwts: total=2245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:51.875 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1769654: Fri Dec 6 17:52:43 2024 00:36:51.875 read: IOPS=59, BW=235KiB/s (240kB/s)(616KiB/2623msec) 00:36:51.875 slat (nsec): min=7669, max=43723, avg=25337.24, stdev=3083.01 00:36:51.875 clat (usec): min=471, max=42401, avg=16860.09, stdev=20021.10 00:36:51.875 lat (usec): min=513, max=42427, avg=16885.42, stdev=20020.92 00:36:51.875 clat percentiles (usec): 00:36:51.875 | 1.00th=[ 510], 5.00th=[ 742], 10.00th=[ 799], 20.00th=[ 881], 00:36:51.875 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1029], 60.00th=[ 1123], 00:36:51.875 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:51.875 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:51.875 | 99.99th=[42206] 00:36:51.875 bw ( KiB/s): min= 120, max= 424, per=2.66%, avg=241.60, stdev=120.05, samples=5 00:36:51.875 iops : min= 30, max= 106, avg=60.40, stdev=30.01, samples=5 00:36:51.875 lat (usec) : 500=0.65%, 750=6.45%, 1000=35.48% 00:36:51.876 lat (msec) : 2=18.06%, 50=38.71% 00:36:51.876 cpu : usr=0.11%, sys=0.11%, ctx=155, majf=0, minf=2 00:36:51.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.876 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.876 issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:51.876 00:36:51.876 Run status group 0 (all jobs): 00:36:51.876 READ: bw=9059KiB/s (9276kB/s), 235KiB/s-4831KiB/s (240kB/s-4947kB/s), io=27.8MiB (29.2MB), run=2623-3144msec 00:36:51.876 00:36:51.876 Disk stats (read/write): 00:36:51.876 nvme0n1: ios=3458/0, merge=0/0, ticks=2628/0, in_queue=2628, util=92.85% 00:36:51.876 nvme0n2: ios=1189/0, merge=0/0, ticks=3217/0, in_queue=3217, util=99.60% 00:36:51.876 nvme0n3: ios=2115/0, merge=0/0, ticks=2419/0, in_queue=2419, util=95.90% 00:36:51.876 nvme0n4: ios=153/0, merge=0/0, ticks=2558/0, in_queue=2558, util=96.41% 00:36:51.876 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:51.876 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc3 00:36:52.136 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:52.136 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:52.397 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:52.397 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:52.397 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:52.397 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1769463 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:52.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:52.659 nvmf hotplug test: fio failed as expected 00:36:52.659 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:52.921 17:52:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:52.921 rmmod nvme_tcp 00:36:52.921 rmmod nvme_fabrics 00:36:52.921 rmmod nvme_keyring 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1768496 ']' 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1768496 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1768496 ']' 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1768496 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:52.921 17:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768496 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768496' 00:36:53.181 killing process with pid 1768496 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1768496 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1768496 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:53.181 17:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:55.721 00:36:55.721 real 0m28.005s 00:36:55.721 user 2m16.668s 00:36:55.721 sys 0m11.810s 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:55.721 ************************************ 00:36:55.721 END TEST nvmf_fio_target 00:36:55.721 ************************************ 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:55.721 ************************************ 00:36:55.721 START TEST nvmf_bdevio 00:36:55.721 ************************************ 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:55.721 * Looking for test storage... 
00:36:55.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:55.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.721 --rc genhtml_branch_coverage=1 00:36:55.721 --rc genhtml_function_coverage=1 00:36:55.721 --rc genhtml_legend=1 00:36:55.721 --rc geninfo_all_blocks=1 00:36:55.721 --rc geninfo_unexecuted_blocks=1 00:36:55.721 00:36:55.721 ' 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:55.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.721 --rc genhtml_branch_coverage=1 00:36:55.721 --rc genhtml_function_coverage=1 00:36:55.721 --rc genhtml_legend=1 00:36:55.721 --rc geninfo_all_blocks=1 00:36:55.721 --rc geninfo_unexecuted_blocks=1 00:36:55.721 00:36:55.721 ' 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:55.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.721 --rc genhtml_branch_coverage=1 00:36:55.721 --rc genhtml_function_coverage=1 00:36:55.721 --rc genhtml_legend=1 00:36:55.721 --rc geninfo_all_blocks=1 00:36:55.721 --rc geninfo_unexecuted_blocks=1 00:36:55.721 00:36:55.721 ' 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:55.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.721 --rc genhtml_branch_coverage=1 00:36:55.721 --rc genhtml_function_coverage=1 00:36:55.721 --rc genhtml_legend=1 00:36:55.721 --rc geninfo_all_blocks=1 00:36:55.721 --rc geninfo_unexecuted_blocks=1 00:36:55.721 00:36:55.721 ' 00:36:55.721 17:52:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:55.721 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:55.722 17:52:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:55.722 17:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:02.302 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:02.302 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:02.303 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:02.303 17:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:02.303 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:02.303 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:02.303 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:02.303 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:02.564 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:02.564 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:02.564 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:02.564 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:02.564 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:02.564 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:02.564 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:02.564 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:02.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:02.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:37:02.564 00:37:02.564 --- 10.0.0.2 ping statistics --- 00:37:02.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.564 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:37:02.564 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:02.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:02.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:37:02.564 00:37:02.564 --- 10.0.0.1 ping statistics --- 00:37:02.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.564 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:37:02.565 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:02.565 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:37:02.565 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:02.565 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:02.565 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:02.565 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:02.565 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:02.565 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:02.565 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:02.826 17:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1772161 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1772161 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1772161 ']' 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:02.826 17:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:02.826 [2024-12-06 17:52:54.733057] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:02.826 [2024-12-06 17:52:54.734030] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:37:02.826 [2024-12-06 17:52:54.734073] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:02.826 [2024-12-06 17:52:54.827766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:02.826 [2024-12-06 17:52:54.864113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:02.826 [2024-12-06 17:52:54.864145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.826 [2024-12-06 17:52:54.864153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.826 [2024-12-06 17:52:54.864160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:02.826 [2024-12-06 17:52:54.864166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:02.826 [2024-12-06 17:52:54.865634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:02.826 [2024-12-06 17:52:54.865783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:02.826 [2024-12-06 17:52:54.866020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:02.826 [2024-12-06 17:52:54.866021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:03.086 [2024-12-06 17:52:54.922734] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
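
The trace above is nvmf/common.sh's nvmf_tcp_init plus nvmfappstart: one port of the e810 pair is moved into a private network namespace, both ends are addressed, and nvmf_tgt is launched inside the namespace in interrupt mode. A stand-alone sketch of the same bring-up, using the device and namespace names from this run (the workspace path, core mask 0x78 and the cvl_0_* names are specific to this job):

    # flush any stale addresses, then put the target-side port in its own namespace
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions, as the pings in the trace do
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # --interrupt-mode parks idle reactors on epoll instead of busy-polling;
    # -m 0x78 pins the target to cores 3-6, matching the reactor notices above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
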
00:37:03.086 [2024-12-06 17:52:54.924224] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:03.086 [2024-12-06 17:52:54.924347] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:03.086 [2024-12-06 17:52:54.925185] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:03.086 [2024-12-06 17:52:54.925220] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:03.657 [2024-12-06 17:52:55.562916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:03.657 Malloc0 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.657 17:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:03.657 [2024-12-06 17:52:55.651147] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:03.657 { 00:37:03.657 "params": { 00:37:03.657 "name": "Nvme$subsystem", 00:37:03.657 "trtype": "$TEST_TRANSPORT", 00:37:03.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:03.657 "adrfam": "ipv4", 00:37:03.657 "trsvcid": "$NVMF_PORT", 00:37:03.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:03.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:03.657 "hdgst": ${hdgst:-false}, 00:37:03.657 "ddgst": ${ddgst:-false} 00:37:03.657 }, 00:37:03.657 "method": "bdev_nvme_attach_controller" 00:37:03.657 } 00:37:03.657 EOF 00:37:03.657 )") 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:37:03.657 17:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:03.657 "params": { 00:37:03.657 "name": "Nvme1", 00:37:03.657 "trtype": "tcp", 00:37:03.657 "traddr": "10.0.0.2", 00:37:03.657 "adrfam": "ipv4", 00:37:03.657 "trsvcid": "4420", 00:37:03.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:03.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:03.657 "hdgst": false, 00:37:03.657 "ddgst": false 00:37:03.657 }, 00:37:03.657 "method": "bdev_nvme_attach_controller" 00:37:03.657 }' 00:37:03.657 [2024-12-06 17:52:55.706656] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:37:03.658 [2024-12-06 17:52:55.706711] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772197 ] 00:37:03.918 [2024-12-06 17:52:55.795124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:03.918 [2024-12-06 17:52:55.836473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:03.918 [2024-12-06 17:52:55.836621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.918 [2024-12-06 17:52:55.836621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:03.918 I/O targets: 00:37:03.918 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:03.918 00:37:03.918 00:37:03.918 CUnit - A unit testing framework for C - Version 2.1-3 00:37:03.918 http://cunit.sourceforge.net/ 00:37:03.918 00:37:03.918 00:37:03.918 Suite: bdevio tests on: Nvme1n1 00:37:04.178 Test: blockdev write read block ...passed 00:37:04.178 Test: blockdev write zeroes read block ...passed 00:37:04.178 Test: blockdev write zeroes read no split ...passed 00:37:04.178 Test: blockdev write zeroes read split ...passed 00:37:04.178 Test: blockdev write zeroes read split partial ...passed 00:37:04.178 Test: blockdev reset ...[2024-12-06 17:52:56.149576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:37:04.178 [2024-12-06 17:52:56.149684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x896580 (9): Bad file descriptor 00:37:04.178 [2024-12-06 17:52:56.156452] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:37:04.178 passed 00:37:04.178 Test: blockdev write read 8 blocks ...passed 00:37:04.178 Test: blockdev write read size > 128k ...passed 00:37:04.178 Test: blockdev write read invalid size ...passed 00:37:04.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:04.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:04.178 Test: blockdev write read max offset ...passed 00:37:04.439 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:04.439 Test: blockdev writev readv 8 blocks ...passed 00:37:04.439 Test: blockdev writev readv 30 x 1block ...passed 00:37:04.439 Test: blockdev writev readv block ...passed 00:37:04.439 Test: blockdev writev readv size > 128k ...passed 00:37:04.439 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:04.439 Test: blockdev comparev and writev ...[2024-12-06 17:52:56.377674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:04.439 [2024-12-06 17:52:56.377710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:04.439 [2024-12-06 17:52:56.377726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:04.439 [2024-12-06 17:52:56.377735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:04.439 [2024-12-06 17:52:56.378161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:04.439 [2024-12-06 17:52:56.378174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:04.439 [2024-12-06 17:52:56.378188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:04.439 [2024-12-06 17:52:56.378197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:04.439 [2024-12-06 17:52:56.378583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:04.439 [2024-12-06 17:52:56.378595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:04.439 [2024-12-06 17:52:56.378609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:04.439 [2024-12-06 17:52:56.378617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:04.439 [2024-12-06 17:52:56.379061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:04.439 [2024-12-06 17:52:56.379073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:04.439 [2024-12-06 17:52:56.379087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:04.439 [2024-12-06 17:52:56.379095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:04.439 passed 00:37:04.439 Test: blockdev nvme passthru rw ...passed 00:37:04.439 Test: blockdev nvme passthru vendor specific ...[2024-12-06 17:52:56.463111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:04.439 [2024-12-06 17:52:56.463126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:04.439 [2024-12-06 17:52:56.463344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:04.439 [2024-12-06 17:52:56.463355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:04.439 [2024-12-06 17:52:56.463576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:04.439 [2024-12-06 17:52:56.463587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:04.439 [2024-12-06 17:52:56.463805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:04.439 [2024-12-06 17:52:56.463816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:04.439 passed 00:37:04.439 Test: blockdev nvme admin passthru ...passed 00:37:04.700 Test: blockdev copy ...passed 00:37:04.700 00:37:04.700 Run Summary: Type Total Ran Passed Failed Inactive 00:37:04.700 suites 1 1 n/a 0 0 00:37:04.700 tests 23 23 23 0 0 00:37:04.700 asserts 152 152 152 0 n/a 00:37:04.700 00:37:04.700 Elapsed time = 1.096 seconds 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:04.700 rmmod nvme_tcp 00:37:04.700 rmmod nvme_fabrics 00:37:04.700 rmmod nvme_keyring 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
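
Between the target start above and the CUnit summary, bdevio.sh drives the standard RPC bring-up and then runs the bdevio app against the listener. The same sequence as direct rpc.py calls is sketched below (rpc_cmd in the harness is a thin wrapper around scripts/rpc.py; the default RPC socket is assumed, and the transport flags are copied verbatim from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB RAM bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then attaches as an initiator through the bdev_nvme_attach_controller
    # entry printed in the trace; wrapping that entry in the usual
    # {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope is assumed here,
    # since the envelope itself is generated off-screen by gen_nvmf_target_json
    ./test/bdev/bdevio/bdevio --json bdevio.json

The harness feeds the JSON over /dev/fd/62 instead of a file, which is equivalent; the Nvme1n1 bdev that the test suite exercises is the remote view of Malloc0 created above.
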
00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1772161 ']' 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1772161 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1772161 ']' 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1772161 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:04.700 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1772161 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1772161' 00:37:04.962 killing process with pid 1772161 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1772161 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1772161 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:04.962 17:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.502 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:07.502 00:37:07.502 real 0m11.768s 00:37:07.502 user 
0m8.835s 00:37:07.502 sys 0m6.159s 00:37:07.502 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:07.502 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:07.502 ************************************ 00:37:07.502 END TEST nvmf_bdevio 00:37:07.502 ************************************ 00:37:07.502 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:37:07.502 00:37:07.502 real 4m57.760s 00:37:07.502 user 10m16.959s 00:37:07.502 sys 2m4.004s 00:37:07.502 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:07.502 17:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:07.502 ************************************ 00:37:07.502 END TEST nvmf_target_core_interrupt_mode 00:37:07.502 ************************************ 00:37:07.502 17:52:59 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:07.502 17:52:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:07.502 17:52:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:07.502 17:52:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:07.502 ************************************ 00:37:07.502 START TEST nvmf_interrupt 00:37:07.502 ************************************ 00:37:07.502 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:07.502 * Looking for test storage... 
00:37:07.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:07.502 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:07.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.503 --rc genhtml_branch_coverage=1 00:37:07.503 --rc genhtml_function_coverage=1 00:37:07.503 --rc genhtml_legend=1 00:37:07.503 --rc geninfo_all_blocks=1 00:37:07.503 --rc geninfo_unexecuted_blocks=1 00:37:07.503 00:37:07.503 ' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:07.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.503 --rc genhtml_branch_coverage=1 00:37:07.503 --rc genhtml_function_coverage=1 00:37:07.503 --rc genhtml_legend=1 00:37:07.503 --rc geninfo_all_blocks=1 00:37:07.503 --rc geninfo_unexecuted_blocks=1 00:37:07.503 00:37:07.503 ' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:07.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.503 --rc genhtml_branch_coverage=1 00:37:07.503 --rc genhtml_function_coverage=1 00:37:07.503 --rc genhtml_legend=1 00:37:07.503 --rc geninfo_all_blocks=1 00:37:07.503 --rc geninfo_unexecuted_blocks=1 00:37:07.503 00:37:07.503 ' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:07.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.503 --rc genhtml_branch_coverage=1 00:37:07.503 --rc genhtml_function_coverage=1 00:37:07.503 --rc genhtml_legend=1 00:37:07.503 --rc geninfo_all_blocks=1 00:37:07.503 --rc geninfo_unexecuted_blocks=1 00:37:07.503 00:37:07.503 ' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:37:07.503 17:52:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:15.644 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.644 17:53:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:15.644 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:15.644 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:15.644 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.644 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:15.645 17:53:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:37:15.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:15.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms
00:37:15.645
00:37:15.645 --- 10.0.0.2 ping statistics ---
00:37:15.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:15.645 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:15.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:15.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms
00:37:15.645
00:37:15.645 --- 10.0.0.1 ping statistics ---
00:37:15.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:15.645 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1774673
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1774673
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1774673 ']'
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:15.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:15.645 17:53:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:15.645 [2024-12-06 17:53:06.617409] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:37:15.645 [2024-12-06 17:53:06.618383] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization...
00:37:15.645 [2024-12-06 17:53:06.618421] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:15.645 [2024-12-06 17:53:06.711200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:37:15.645 [2024-12-06 17:53:06.746586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
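The nvmf_tcp_init sequence traced above turns the two E810 ports into a point-to-point test link on a single machine: one port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal standalone sketch of the same bring-up, using the interface names and addresses from this run (adjust for other hardware):

    # target side: isolate one port in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: the peer port stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # open the NVMe/TCP port and confirm reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, the target binary is launched inside it (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ... --interrupt-mode), so the kernel initiator and the SPDK target exercise a real NIC-to-NIC TCP path rather than loopback.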
00:37:15.645 [2024-12-06 17:53:06.746619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:15.645 [2024-12-06 17:53:06.746627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:15.645 [2024-12-06 17:53:06.746634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:15.645 [2024-12-06 17:53:06.746644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:15.645 [2024-12-06 17:53:06.747718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:15.645 [2024-12-06 17:53:06.747908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:15.645 [2024-12-06 17:53:06.804002] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:15.645 [2024-12-06 17:53:06.804569] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:15.645 [2024-12-06 17:53:06.804894] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:37:15.645 5000+0 records in 00:37:15.645 5000+0 records out 00:37:15.645 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0183363 s, 558 MB/s 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:15.645 AIO0 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:15.645 [2024-12-06 17:53:07.516722] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.645 17:53:07 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:15.645 [2024-12-06 17:53:07.561008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1774673 0 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1774673 0 idle 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1774673 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:15.645 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:15.646 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:15.646 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:15.646 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:15.646 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:15.646 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:15.646 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:15.646 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1774673 -w 256 00:37:15.646 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1774673 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.26 reactor_0' 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1774673 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.26 reactor_0 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1774673 1 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1774673 1 idle 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1774673 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1774673 -w 256 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1774677 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1774677 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1774725 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1774673 0 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1774673 0 busy 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1774673 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1774673 -w 256 00:37:15.907 17:53:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:16.167 17:53:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1774673 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.27 reactor_0' 00:37:16.167 17:53:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1774673 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.27 reactor_0 00:37:16.167 17:53:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:16.167 17:53:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:16.167 17:53:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:16.167 17:53:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:16.167 17:53:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:16.167 17:53:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:16.167 17:53:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:37:17.133 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:37:17.133 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:17.133 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1774673 -w 256 00:37:17.133 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1774673 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.55 reactor_0' 00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1774673 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.55 reactor_0 00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold ))
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1774673 1
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1774673 1 busy
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1774673
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1774673 -w 256
00:37:17.393 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1774677 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.34 reactor_1'
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1774677 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.34 reactor_1
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:17.652 17:53:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1774725
00:37:27.654 Initializing NVMe Controllers
00:37:27.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:27.654 Controller IO queue size 256, less than required.
00:37:27.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:27.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:27.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:27.654 Initialization complete. Launching workers.
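The load that produced this output is the spdk_nvme_perf run the harness launched in the background earlier (perf_pid=1774725; the wait above just collected it), and the summary table below is its result. Reading the command line from the trace, with flag glosses based on spdk_nvme_perf's standard options (interpretation, not part of the log):

    # -q 256: 256 outstanding I/Os per queue pair (hence the 'queue size 256' notice)
    # -o 4096: 4 KiB I/O size
    # -w randrw -M 30: random mixed workload, 30% reads / 70% writes
    # -t 10: run for 10 seconds
    # -c 0xC: pin workers to lcores 2 and 3, matching the two associations above
    # -r: transport ID of the subsystem this test created
    spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'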
00:37:27.654 ========================================================
00:37:27.654 Latency(us)
00:37:27.654 Device Information : IOPS MiB/s Average min max
00:37:27.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19398.53 75.78 13201.98 3621.55 51021.43
00:37:27.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 21265.01 83.07 12039.76 7556.06 29646.08
00:37:27.654 ========================================================
00:37:27.654 Total : 40663.54 158.84 12594.20 3621.55 51021.43
00:37:27.654
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1774673 0
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1774673 0 idle
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1774673
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1774673 -w 256
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1774673 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.28 reactor_0'
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1774673 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.28 reactor_0
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1774673 1
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1774673 1 idle
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1774673
00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1774673 -w 256 00:37:27.654 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1774677 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1774677 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:27.655 17:53:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:27.655 17:53:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:37:27.655 17:53:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:37:27.655 17:53:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:27.655 17:53:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:27.655 17:53:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1774673 0 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1774673 0 idle 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1774673 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1774673 -w 256 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1774673 root 20 0 128.2g 79488 32256 S 6.7 0.1 0:20.61 reactor_0' 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1774673 root 20 0 128.2g 79488 32256 S 6.7 0.1 0:20.61 reactor_0 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1774673 1 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1774673 1 idle 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1774673 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
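This busy/idle probe recurs throughout the test, so it is worth unpacking once: reactor_is_busy_or_idle takes a single top snapshot of the target's threads, pulls the %CPU column for the reactor of interest, and compares it against a threshold (idle_threshold=30 here; an interrupt-mode reactor should sit near 0% when nothing is queued). A distilled helper equivalent to the traced pipeline (the function name reactor_cpu is a stand-in, not the harness's own):

    # one top snapshot of the target's threads; print %CPU for reactor <idx>
    reactor_cpu() {
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g' | awk '{print $9}'
    }
    rate=$(reactor_cpu 1774673 0)               # e.g. "6.7" in the check below
    (( ${rate%.*} <= 30 )) && echo "reactor_0 is idle (${rate}% CPU)"

The truncation ${rate%.*} mirrors the trace's cpu_rate=6.7 -> cpu_rate=6 step, since bash arithmetic cannot compare floating-point values.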
00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1774673 -w 256 00:37:29.571 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1774677 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1774677 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:29.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:29.832 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:29.833 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:29.833 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:37:29.833 17:53:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:29.833 17:53:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:29.833 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:29.833 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:29.833 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:29.833 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:29.833 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:29.833 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:29.833 rmmod nvme_tcp 00:37:30.094 rmmod nvme_fabrics 00:37:30.094 rmmod nvme_keyring 00:37:30.094 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:30.094 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:30.094 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:30.094 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1774673 ']' 00:37:30.094 17:53:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1774673 00:37:30.094 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1774673 ']' 00:37:30.094 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1774673 00:37:30.094 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:37:30.094 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:30.094 17:53:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1774673 00:37:30.094 17:53:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:30.094 17:53:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:30.094 17:53:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1774673' 00:37:30.094 killing process with pid 1774673 00:37:30.094 17:53:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1774673 00:37:30.094 17:53:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1774673 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:30.356 17:53:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:32.269 17:53:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:32.269 00:37:32.269 real 0m25.063s 00:37:32.269 user 0m40.671s 00:37:32.269 sys 0m9.078s 00:37:32.269 17:53:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:32.269 17:53:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:32.269 ************************************ 00:37:32.269 END TEST nvmf_interrupt 00:37:32.269 ************************************ 00:37:32.269 00:37:32.270 real 30m2.251s 00:37:32.270 user 63m5.720s 00:37:32.270 sys 13m29.819s 00:37:32.270 17:53:24 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:32.270 17:53:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:32.270 ************************************ 00:37:32.270 END TEST nvmf_tcp 00:37:32.270 ************************************ 00:37:32.270 17:53:24 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:37:32.270 17:53:24 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:32.270 17:53:24 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:32.270 17:53:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:32.270 17:53:24 -- common/autotest_common.sh@10 -- # set +x 00:37:32.532 ************************************ 00:37:32.533 START TEST spdkcli_nvmf_tcp 00:37:32.533 ************************************ 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:32.533 * Looking for test storage... 00:37:32.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:32.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.533 --rc genhtml_branch_coverage=1 00:37:32.533 --rc genhtml_function_coverage=1 00:37:32.533 --rc genhtml_legend=1 00:37:32.533 --rc geninfo_all_blocks=1 00:37:32.533 --rc geninfo_unexecuted_blocks=1 00:37:32.533 00:37:32.533 ' 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:32.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.533 --rc genhtml_branch_coverage=1 00:37:32.533 --rc genhtml_function_coverage=1 00:37:32.533 --rc genhtml_legend=1 00:37:32.533 --rc geninfo_all_blocks=1 00:37:32.533 --rc geninfo_unexecuted_blocks=1 00:37:32.533 00:37:32.533 ' 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:32.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.533 --rc genhtml_branch_coverage=1 00:37:32.533 --rc genhtml_function_coverage=1 00:37:32.533 --rc genhtml_legend=1 00:37:32.533 --rc geninfo_all_blocks=1 00:37:32.533 --rc geninfo_unexecuted_blocks=1 00:37:32.533 00:37:32.533 ' 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:32.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.533 --rc genhtml_branch_coverage=1 00:37:32.533 --rc genhtml_function_coverage=1 00:37:32.533 --rc genhtml_legend=1 00:37:32.533 --rc geninfo_all_blocks=1 00:37:32.533 --rc geninfo_unexecuted_blocks=1 00:37:32.533 00:37:32.533 ' 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:32.533 
17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.533 17:53:24 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:32.534 17:53:24 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:32.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:32.534 17:53:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1775091 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1775091 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1775091 ']' 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:32.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:32.795 17:53:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:32.795 [2024-12-06 17:53:24.653508] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
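One wart surfaced a few lines up: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' and bash prints "[: : integer expression expected". In the nvmf_interrupt run earlier the same test saw 1 (interrupt mode on); under spdkcli_nvmf_tcp the flag is unset, so test receives an empty string where -eq demands an integer. The failure is harmless here, since the '[' simply returns nonzero and the interrupt-mode branch is skipped, but the pattern is easy to harden (VAR below is a placeholder; the actual flag name is not visible in this trace):

    unset VAR
    [ "$VAR" -eq 1 ]                  # bash: [: : integer expression expected
    # defensive form: default the expansion so -eq always sees a number
    [ "${VAR:-0}" -eq 1 ] && echo "feature enabled"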
00:37:32.795 [2024-12-06 17:53:24.653553] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775091 ] 00:37:32.795 [2024-12-06 17:53:24.730958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:32.795 [2024-12-06 17:53:24.769054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.795 [2024-12-06 17:53:24.769057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:33.734 17:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:33.734 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:33.734 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:33.734 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:33.734 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:33.734 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:33.734 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:33.734 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:33.734 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:33.734 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:33.734 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:33.734 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:33.734 ' 00:37:36.273 [2024-12-06 17:53:28.198693] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:37.653 [2024-12-06 17:53:29.554883] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:40.195 [2024-12-06 17:53:32.078059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:42.242 [2024-12-06 17:53:34.292320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:44.154 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:44.154 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:44.154 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:44.154 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:44.154 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:44.154 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:44.154 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:44.154 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:44.154 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:44.154 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:44.154 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:44.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:44.154 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:44.154 17:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:44.154 17:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:44.154 17:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.154 17:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:44.154 17:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:44.154 17:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.154 17:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:44.154 17:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:44.749 17:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:44.749 17:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:44.749 17:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:44.749 17:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:44.749 17:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.749 
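check_match above re-lists the freshly built tree with spdkcli.py ll /nvmf and hands the recorded template to the match tool; the generated dump is deleted once it passes. A minimal sketch of that compare step, with the redirect target inferred from the rm -f that follows (paths relative to an SPDK checkout, and treat them as assumptions):

  # Dump the live spdkcli view of /nvmf next to the golden template.
  ./scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
  # match compares the dump against the .match template, which can use
  # wildcard tokens for fields that vary run to run.
  ./test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
  # A zero exit status means the configurations agree; drop the dump.
  rm -f test/spdkcli/match_files/spdkcli_nvmf.test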
17:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:44.749 17:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:44.749 17:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.749 17:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:44.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:44.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:44.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:44.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:44.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:44.749 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:44.749 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:44.749 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:44.749 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:44.750 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:44.750 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:44.750 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:44.750 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:44.750 ' 00:37:51.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:51.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:51.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:51.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:51.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:51.330 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:51.330 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:51.330 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:51.330 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:51.330 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:51.330 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:51.330 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:51.330 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:51.330 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:51.330 
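The delete pass above unwinds the configuration in roughly the reverse order of creation: namespaces and allowed hosts first, then listeners, then the subsystems themselves, and the malloc bdevs last, so nothing is removed while another object still references it. A rough JSON-RPC equivalent for one subsystem, using the default /var/tmp/spdk.sock socket (names copied from the trace):

  # Detach resources before deleting the subsystem that owns them.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2014-08.org.spdk:cnode1 nqn.2014-08.org.spdk:cnode2
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode1
  # Bdevs go last, once no subsystem exports them.
  ./scripts/rpc.py bdev_malloc_delete Malloc5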
17:53:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1775091 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1775091 ']' 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1775091 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1775091 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1775091' 00:37:51.330 killing process with pid 1775091 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1775091 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1775091 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1775091 ']' 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1775091 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1775091 ']' 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1775091 00:37:51.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1775091) - No such process 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1775091 is not found' 00:37:51.330 Process with pid 1775091 is not found 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:51.330 00:37:51.330 real 0m18.083s 00:37:51.330 user 0m40.187s 00:37:51.330 sys 0m0.866s 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.330 17:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:51.330 ************************************ 00:37:51.330 END TEST spdkcli_nvmf_tcp 00:37:51.330 ************************************ 00:37:51.330 17:53:42 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:51.330 17:53:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:51.330 17:53:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.330 17:53:42 -- common/autotest_common.sh@10 -- # set +x 00:37:51.330 ************************************ 00:37:51.330 START TEST nvmf_identify_passthru 00:37:51.330 ************************************ 00:37:51.330 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:51.330 * Looking for test 
storage... 00:37:51.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.330 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:51.330 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:37:51.330 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:51.330 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:51.330 17:53:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:51.331 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.331 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:51.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.331 --rc genhtml_branch_coverage=1 00:37:51.331 --rc genhtml_function_coverage=1 00:37:51.331 --rc genhtml_legend=1 00:37:51.331 --rc geninfo_all_blocks=1 00:37:51.331 --rc geninfo_unexecuted_blocks=1 00:37:51.331 00:37:51.331 ' 00:37:51.331 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:51.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.331 --rc genhtml_branch_coverage=1 00:37:51.331 --rc genhtml_function_coverage=1 00:37:51.331 --rc genhtml_legend=1 00:37:51.331 --rc geninfo_all_blocks=1 00:37:51.331 --rc geninfo_unexecuted_blocks=1 00:37:51.331 00:37:51.331 ' 00:37:51.331 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:51.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.331 --rc genhtml_branch_coverage=1 00:37:51.331 --rc genhtml_function_coverage=1 00:37:51.331 --rc genhtml_legend=1 00:37:51.331 --rc geninfo_all_blocks=1 00:37:51.331 --rc geninfo_unexecuted_blocks=1 00:37:51.331 00:37:51.331 ' 00:37:51.331 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:51.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.331 --rc genhtml_branch_coverage=1 00:37:51.331 --rc genhtml_function_coverage=1 00:37:51.331 --rc genhtml_legend=1 00:37:51.331 --rc geninfo_all_blocks=1 00:37:51.331 --rc geninfo_unexecuted_blocks=1 00:37:51.331 00:37:51.331 ' 00:37:51.331 17:53:42 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.331 17:53:42 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.331 17:53:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.331 17:53:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.331 17:53:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:51.331 17:53:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:51.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.331 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.331 17:53:42 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.331 17:53:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.331 17:53:42 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.332 17:53:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.332 17:53:42 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.332 17:53:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:51.332 17:53:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.332 17:53:42 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:51.332 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:51.332 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.332 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:51.332 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:51.332 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:51.332 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.332 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:51.332 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.332 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:51.332 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:51.332 17:53:42 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:51.332 17:53:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:57.912 17:53:49 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:57.912 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:57.913 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:57.913 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:57.913 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:57.913 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:57.913 17:53:49 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:57.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:57.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:37:57.913 00:37:57.913 --- 10.0.0.2 ping statistics --- 00:37:57.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.913 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:57.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:57.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:37:57.913 00:37:57.913 --- 10.0.0.1 ping statistics --- 00:37:57.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.913 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:57.913 17:53:49 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:57.913 17:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:57.913 17:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:57.913 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:58.173 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:58.173 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:37:58.173 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:37:58.173 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:58.173 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:58.173 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:58.173 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:58.173 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:58.741 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:37:58.741 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:58.741 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:58.741 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:59.002 17:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:59.002 17:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:59.002 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.002 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:59.262 17:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:59.262 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:59.262 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:59.262 17:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1777819 00:37:59.262 17:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:59.262 17:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:59.262 17:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1777819 00:37:59.262 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1777819 ']' 00:37:59.262 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.262 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:59.262 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:59.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:59.262 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:59.262 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:59.262 [2024-12-06 17:53:51.131211] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:37:59.262 [2024-12-06 17:53:51.131262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:59.263 [2024-12-06 17:53:51.222161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:59.263 [2024-12-06 17:53:51.259747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:59.263 [2024-12-06 17:53:51.259785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:59.263 [2024-12-06 17:53:51.259793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:59.263 [2024-12-06 17:53:51.259800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:59.263 [2024-12-06 17:53:51.259805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:59.263 [2024-12-06 17:53:51.261578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.263 [2024-12-06 17:53:51.261735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:59.263 [2024-12-06 17:53:51.261978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.263 [2024-12-06 17:53:51.261978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.203 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.203 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:38:00.203 17:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:00.203 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.203 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:00.203 INFO: Log level set to 20 00:38:00.203 INFO: Requests: 00:38:00.203 { 00:38:00.203 "jsonrpc": "2.0", 00:38:00.203 "method": "nvmf_set_config", 00:38:00.203 "id": 1, 00:38:00.203 "params": { 00:38:00.203 "admin_cmd_passthru": { 00:38:00.203 "identify_ctrlr": true 00:38:00.203 } 00:38:00.203 } 00:38:00.203 } 00:38:00.203 00:38:00.203 INFO: response: 00:38:00.203 { 00:38:00.203 "jsonrpc": "2.0", 00:38:00.203 "id": 1, 00:38:00.203 "result": true 00:38:00.203 } 00:38:00.203 00:38:00.203 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.203 17:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:00.203 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.203 17:53:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:00.203 INFO: Setting log level to 20 00:38:00.203 INFO: Setting log level to 20 00:38:00.203 INFO: Log level set to 20 00:38:00.203 INFO: Log level set to 20 00:38:00.203 INFO: Requests: 00:38:00.203 { 00:38:00.204 "jsonrpc": "2.0", 00:38:00.204 "method": "framework_start_init", 00:38:00.204 "id": 1 00:38:00.204 } 00:38:00.204 00:38:00.204 INFO: Requests: 00:38:00.204 { 00:38:00.204 "jsonrpc": "2.0", 00:38:00.204 "method": "framework_start_init", 00:38:00.204 "id": 1 00:38:00.204 } 00:38:00.204 00:38:00.204 [2024-12-06 17:53:52.046003] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:00.204 INFO: response: 00:38:00.204 { 00:38:00.204 "jsonrpc": "2.0", 00:38:00.204 "id": 1, 00:38:00.204 "result": true 00:38:00.204 } 00:38:00.204 00:38:00.204 INFO: response: 00:38:00.204 { 00:38:00.204 "jsonrpc": "2.0", 00:38:00.204 "id": 1, 00:38:00.204 "result": true 00:38:00.204 } 00:38:00.204 00:38:00.204 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.204 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:00.204 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.204 17:53:52 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:00.204 INFO: Setting log level to 40 00:38:00.204 INFO: Setting log level to 40 00:38:00.204 INFO: Setting log level to 40 00:38:00.204 [2024-12-06 17:53:52.059554] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:00.204 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.204 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:00.204 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:00.204 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:00.204 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:38:00.204 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.204 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:00.463 Nvme0n1 00:38:00.463 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.463 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.464 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.464 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:00.464 [2024-12-06 17:53:52.458291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.464 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:00.464 [ 00:38:00.464 { 00:38:00.464 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:00.464 "subtype": "Discovery", 00:38:00.464 "listen_addresses": [], 00:38:00.464 "allow_any_host": true, 00:38:00.464 "hosts": [] 00:38:00.464 }, 00:38:00.464 { 00:38:00.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:00.464 "subtype": "NVMe", 00:38:00.464 "listen_addresses": [ 00:38:00.464 { 00:38:00.464 "trtype": "TCP", 00:38:00.464 "adrfam": "IPv4", 00:38:00.464 "traddr": "10.0.0.2", 00:38:00.464 "trsvcid": "4420" 00:38:00.464 } 00:38:00.464 ], 00:38:00.464 "allow_any_host": true, 00:38:00.464 "hosts": [], 00:38:00.464 "serial_number": 
"SPDK00000000000001", 00:38:00.464 "model_number": "SPDK bdev Controller", 00:38:00.464 "max_namespaces": 1, 00:38:00.464 "min_cntlid": 1, 00:38:00.464 "max_cntlid": 65519, 00:38:00.464 "namespaces": [ 00:38:00.464 { 00:38:00.464 "nsid": 1, 00:38:00.464 "bdev_name": "Nvme0n1", 00:38:00.464 "name": "Nvme0n1", 00:38:00.464 "nguid": "36344730526054870025384500000044", 00:38:00.464 "uuid": "36344730-5260-5487-0025-384500000044" 00:38:00.464 } 00:38:00.464 ] 00:38:00.464 } 00:38:00.464 ] 00:38:00.464 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.464 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:00.464 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:00.464 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:00.726 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:38:00.726 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:00.726 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:00.726 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:00.987 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:38:00.987 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:38:00.987 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:38:00.987 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.987 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:00.987 17:53:52 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:00.987 rmmod nvme_tcp 00:38:00.987 rmmod nvme_fabrics 00:38:00.987 rmmod nvme_keyring 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1777819 ']' 00:38:00.987 17:53:52 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1777819 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1777819 ']' 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1777819 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1777819 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1777819' 00:38:00.987 killing process with pid 1777819 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1777819 00:38:00.987 17:53:52 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1777819 00:38:01.247 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:01.247 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:01.247 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:01.247 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:38:01.247 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:38:01.247 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:01.247 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:38:01.247 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:01.247 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:01.247 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.247 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:01.247 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.792 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:03.792 00:38:03.792 real 0m12.795s 00:38:03.792 user 0m10.191s 00:38:03.792 sys 0m6.352s 00:38:03.792 17:53:55 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:03.792 17:53:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:03.792 ************************************ 00:38:03.792 END TEST nvmf_identify_passthru 00:38:03.792 ************************************ 00:38:03.792 17:53:55 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:03.792 17:53:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:03.792 17:53:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:03.792 17:53:55 -- common/autotest_common.sh@10 -- # set +x 00:38:03.792 ************************************ 00:38:03.792 START TEST nvmf_dif 00:38:03.792 ************************************ 00:38:03.792 17:53:55 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:03.792 * Looking for test storage... 
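The identify_passthru check that just completed reduces to querying the TCP-exported controller with spdk_nvme_identify and diffing its Serial/Model fields against the local PCIe device it fronts. A minimal sketch of that pattern, not the test script itself; expected_serial and expected_model are hypothetical stand-ins for the values the test reads from the local controller earlier in the run:

  # Query the passthru controller over NVMe/TCP and extract identity fields,
  # using the same grep/awk pipeline traced in the log above.
  trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  nvmf_serial=$(spdk_nvme_identify -r "$trid" | grep 'Serial Number:' | awk '{print $3}')
  nvmf_model=$(spdk_nvme_identify -r "$trid" | grep 'Model Number:' | awk '{print $3}')
  # Passthru must report the same identity data as the underlying device;
  # any mismatch fails the test.
  if [ "$nvmf_serial" != "$expected_serial" ]; then exit 1; fi
  if [ "$nvmf_model" != "$expected_model" ]; then exit 1; fi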
00:38:03.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:03.792 17:53:55 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:03.792 17:53:55 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:38:03.792 17:53:55 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:03.792 17:53:55 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:38:03.792 17:53:55 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:03.792 17:53:55 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:03.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.792 --rc genhtml_branch_coverage=1 00:38:03.792 --rc genhtml_function_coverage=1 00:38:03.792 --rc genhtml_legend=1 00:38:03.792 --rc geninfo_all_blocks=1 00:38:03.792 --rc geninfo_unexecuted_blocks=1 00:38:03.792 00:38:03.792 ' 00:38:03.792 17:53:55 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:03.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.792 --rc genhtml_branch_coverage=1 00:38:03.792 --rc genhtml_function_coverage=1 00:38:03.792 --rc genhtml_legend=1 00:38:03.792 --rc geninfo_all_blocks=1 00:38:03.792 --rc geninfo_unexecuted_blocks=1 00:38:03.792 00:38:03.792 ' 00:38:03.792 17:53:55 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:38:03.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.792 --rc genhtml_branch_coverage=1 00:38:03.792 --rc genhtml_function_coverage=1 00:38:03.792 --rc genhtml_legend=1 00:38:03.792 --rc geninfo_all_blocks=1 00:38:03.792 --rc geninfo_unexecuted_blocks=1 00:38:03.792 00:38:03.792 ' 00:38:03.792 17:53:55 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:03.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.792 --rc genhtml_branch_coverage=1 00:38:03.792 --rc genhtml_function_coverage=1 00:38:03.792 --rc genhtml_legend=1 00:38:03.792 --rc geninfo_all_blocks=1 00:38:03.792 --rc geninfo_unexecuted_blocks=1 00:38:03.792 00:38:03.792 ' 00:38:03.792 17:53:55 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.792 17:53:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.792 17:53:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.792 17:53:55 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.792 17:53:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.792 17:53:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:38:03.792 17:53:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.792 17:53:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:03.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:03.793 17:53:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:03.793 17:53:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:03.793 17:53:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:03.793 17:53:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:03.793 17:53:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.793 17:53:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:03.793 17:53:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:03.793 17:53:55 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:38:03.793 17:53:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:10.384 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:10.384 
17:54:02 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:10.384 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:10.384 17:54:02 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:10.385 17:54:02 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:10.385 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:10.385 17:54:02 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:10.385 17:54:02 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:10.385 17:54:02 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:10.385 17:54:02 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:10.385 17:54:02 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:10.385 17:54:02 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:10.385 17:54:02 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:10.644 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:10.644 17:54:02 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:10.904 17:54:02 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:10.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:10.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:38:10.904 00:38:10.904 --- 10.0.0.2 ping statistics --- 00:38:10.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:10.904 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:38:10.904 17:54:02 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:10.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:10.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:38:10.904 00:38:10.904 --- 10.0.0.1 ping statistics --- 00:38:10.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:10.904 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:38:10.904 17:54:02 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:10.904 17:54:02 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:38:10.904 17:54:02 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:10.904 17:54:02 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:14.202 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:14.202 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:14.202 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:14.462 17:54:06 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:14.462 17:54:06 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:14.462 17:54:06 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:14.462 17:54:06 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:14.462 17:54:06 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:14.462 17:54:06 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:14.462 17:54:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:14.463 17:54:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:14.463 17:54:06 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:14.463 17:54:06 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:14.463 17:54:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:14.463 17:54:06 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1781286 00:38:14.463 17:54:06 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1781286 00:38:14.463 17:54:06 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:14.463 17:54:06 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1781286 ']' 00:38:14.463 17:54:06 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.463 17:54:06 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:14.463 17:54:06 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:38:14.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.463 17:54:06 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:14.463 17:54:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:14.463 [2024-12-06 17:54:06.461232] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:38:14.463 [2024-12-06 17:54:06.461294] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:14.724 [2024-12-06 17:54:06.560731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.724 [2024-12-06 17:54:06.612846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:14.724 [2024-12-06 17:54:06.612892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:14.724 [2024-12-06 17:54:06.612900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:14.724 [2024-12-06 17:54:06.612907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:14.724 [2024-12-06 17:54:06.612914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:14.724 [2024-12-06 17:54:06.613704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.294 17:54:07 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:15.294 17:54:07 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:38:15.294 17:54:07 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:15.294 17:54:07 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:15.294 17:54:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:15.294 17:54:07 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:15.294 17:54:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:15.294 17:54:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:15.294 17:54:07 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.294 17:54:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:15.294 [2024-12-06 17:54:07.309216] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:15.294 17:54:07 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.294 17:54:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:15.294 17:54:07 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:15.294 17:54:07 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:15.294 17:54:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:15.294 ************************************ 00:38:15.294 START TEST fio_dif_1_default 00:38:15.294 ************************************ 00:38:15.294 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:38:15.294 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:15.294 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:15.294 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:15.294 17:54:07 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:38:15.294 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:15.294 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:15.294 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.294 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:15.555 bdev_null0 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:15.555 [2024-12-06 17:54:07.397568] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:15.555 { 00:38:15.555 "params": { 00:38:15.555 "name": "Nvme$subsystem", 00:38:15.555 "trtype": "$TEST_TRANSPORT", 00:38:15.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:15.555 "adrfam": "ipv4", 00:38:15.555 "trsvcid": "$NVMF_PORT", 00:38:15.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:15.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:15.555 "hdgst": ${hdgst:-false}, 00:38:15.555 
"ddgst": ${ddgst:-false} 00:38:15.555 }, 00:38:15.555 "method": "bdev_nvme_attach_controller" 00:38:15.555 } 00:38:15.555 EOF 00:38:15.555 )") 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:15.555 "params": { 00:38:15.555 "name": "Nvme0", 00:38:15.555 "trtype": "tcp", 00:38:15.555 "traddr": "10.0.0.2", 00:38:15.555 "adrfam": "ipv4", 00:38:15.555 "trsvcid": "4420", 00:38:15.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:15.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:15.555 "hdgst": false, 00:38:15.555 "ddgst": false 00:38:15.555 }, 00:38:15.555 "method": "bdev_nvme_attach_controller" 00:38:15.555 }' 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:15.555 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:15.815 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:15.815 fio-3.35 00:38:15.815 Starting 1 thread 00:38:28.040 00:38:28.040 filename0: (groupid=0, jobs=1): err= 0: pid=1781500: Fri Dec 6 17:54:18 2024 00:38:28.040 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10026msec) 00:38:28.040 slat (nsec): min=5496, max=74226, avg=6488.01, stdev=2720.26 00:38:28.040 clat (usec): min=918, max=42597, avg=40902.70, stdev=2575.73 00:38:28.040 lat (usec): min=926, max=42637, avg=40909.19, stdev=2574.66 00:38:28.040 clat percentiles (usec): 00:38:28.040 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:28.040 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:28.040 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:38:28.040 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:38:28.040 | 99.99th=[42730] 00:38:28.040 bw ( KiB/s): min= 352, max= 448, per=99.75%, avg=390.40, stdev=19.70, samples=20 00:38:28.040 iops : min= 88, max= 112, avg=97.60, stdev= 4.92, samples=20 00:38:28.040 lat (usec) : 1000=0.41% 00:38:28.040 lat (msec) : 50=99.59% 00:38:28.040 cpu : usr=93.61%, sys=6.16%, ctx=15, majf=0, minf=280 00:38:28.040 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:28.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:28.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:28.040 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:28.040 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:28.040 00:38:28.040 Run 
status group 0 (all jobs): 00:38:28.040 READ: bw=391KiB/s (400kB/s), 391KiB/s-391KiB/s (400kB/s-400kB/s), io=3920KiB (4014kB), run=10026-10026msec 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.040 00:38:28.040 real 0m11.279s 00:38:28.040 user 0m25.154s 00:38:28.040 sys 0m0.931s 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 ************************************ 00:38:28.040 END TEST fio_dif_1_default 00:38:28.040 ************************************ 00:38:28.040 17:54:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:28.040 17:54:18 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:28.040 17:54:18 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 ************************************ 00:38:28.040 START TEST fio_dif_1_multi_subsystems 00:38:28.040 ************************************ 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 bdev_null0 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 [2024-12-06 17:54:18.759417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 bdev_null1 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:28.040 { 00:38:28.040 "params": { 00:38:28.040 "name": "Nvme$subsystem", 00:38:28.040 "trtype": "$TEST_TRANSPORT", 00:38:28.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:28.040 "adrfam": "ipv4", 00:38:28.040 "trsvcid": "$NVMF_PORT", 00:38:28.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:28.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:28.040 "hdgst": ${hdgst:-false}, 00:38:28.040 "ddgst": ${ddgst:-false} 00:38:28.040 }, 00:38:28.040 "method": "bdev_nvme_attach_controller" 00:38:28.040 } 00:38:28.040 EOF 00:38:28.040 )") 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:28.040 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:28.041 17:54:18 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:28.041 { 00:38:28.041 "params": { 00:38:28.041 "name": "Nvme$subsystem", 00:38:28.041 "trtype": "$TEST_TRANSPORT", 00:38:28.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:28.041 "adrfam": "ipv4", 00:38:28.041 "trsvcid": "$NVMF_PORT", 00:38:28.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:28.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:28.041 "hdgst": ${hdgst:-false}, 00:38:28.041 "ddgst": ${ddgst:-false} 00:38:28.041 }, 00:38:28.041 "method": "bdev_nvme_attach_controller" 00:38:28.041 } 00:38:28.041 EOF 00:38:28.041 )") 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:28.041 "params": { 00:38:28.041 "name": "Nvme0", 00:38:28.041 "trtype": "tcp", 00:38:28.041 "traddr": "10.0.0.2", 00:38:28.041 "adrfam": "ipv4", 00:38:28.041 "trsvcid": "4420", 00:38:28.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:28.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:28.041 "hdgst": false, 00:38:28.041 "ddgst": false 00:38:28.041 }, 00:38:28.041 "method": "bdev_nvme_attach_controller" 00:38:28.041 },{ 00:38:28.041 "params": { 00:38:28.041 "name": "Nvme1", 00:38:28.041 "trtype": "tcp", 00:38:28.041 "traddr": "10.0.0.2", 00:38:28.041 "adrfam": "ipv4", 00:38:28.041 "trsvcid": "4420", 00:38:28.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:28.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:28.041 "hdgst": false, 00:38:28.041 "ddgst": false 00:38:28.041 }, 00:38:28.041 "method": "bdev_nvme_attach_controller" 00:38:28.041 }' 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:28.041 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:28.041 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:28.041 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:28.041 fio-3.35 00:38:28.041 Starting 2 threads 00:38:38.033 00:38:38.033 filename0: (groupid=0, jobs=1): err= 0: pid=1782288: Fri Dec 6 17:54:29 2024 00:38:38.033 read: IOPS=97, BW=389KiB/s (398kB/s)(3888KiB/10004msec) 00:38:38.033 slat (nsec): min=5485, max=41793, avg=8859.23, stdev=6970.07 00:38:38.033 clat (usec): min=834, max=42372, avg=41142.94, stdev=2632.98 00:38:38.034 lat (usec): min=840, max=42412, avg=41151.80, stdev=2633.94 00:38:38.034 clat percentiles (usec): 00:38:38.034 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:38.034 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:38.034 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:38.034 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:38.034 | 99.99th=[42206] 00:38:38.034 bw ( KiB/s): min= 351, max= 416, per=49.80%, avg=387.32, stdev=14.82, samples=19 00:38:38.034 iops : min= 87, max= 104, avg=96.79, stdev= 3.81, samples=19 00:38:38.034 lat (usec) : 1000=0.41% 00:38:38.034 lat (msec) : 50=99.59% 00:38:38.034 cpu : usr=95.36%, sys=4.46%, ctx=13, majf=0, minf=189 00:38:38.034 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:38.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.034 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.034 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:38.034 filename1: (groupid=0, jobs=1): err= 0: pid=1782289: Fri Dec 6 17:54:29 2024 00:38:38.034 read: IOPS=97, BW=389KiB/s (398kB/s)(3888KiB/10006msec) 00:38:38.034 slat (nsec): min=5480, max=39510, avg=8761.01, stdev=6921.99 00:38:38.034 clat (usec): min=840, max=42842, avg=41150.78, stdev=2636.26 00:38:38.034 lat (usec): min=845, max=42849, avg=41159.54, stdev=2637.18 00:38:38.034 clat percentiles (usec): 00:38:38.034 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:38:38.034 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:38.034 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:38.034 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:38:38.034 | 99.99th=[42730] 00:38:38.034 bw ( KiB/s): min= 352, max= 416, per=49.80%, avg=387.20, stdev=14.31, samples=20 00:38:38.034 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:38:38.034 lat (usec) : 1000=0.41% 00:38:38.034 lat (msec) : 50=99.59% 00:38:38.034 cpu : usr=95.91%, sys=3.90%, ctx=12, majf=0, minf=102 00:38:38.034 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:38.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:38:38.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:38.034 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:38.034 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:38.034 00:38:38.034 Run status group 0 (all jobs): 00:38:38.034 READ: bw=777KiB/s (796kB/s), 389KiB/s-389KiB/s (398kB/s-398kB/s), io=7776KiB (7963kB), run=10004-10006msec 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.295 00:38:38.295 real 0m11.449s 00:38:38.295 user 0m36.383s 00:38:38.295 sys 0m1.131s 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:38.295 17:54:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:38.295 ************************************ 00:38:38.295 END TEST fio_dif_1_multi_subsystems 00:38:38.295 ************************************ 00:38:38.295 17:54:30 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:38:38.295 17:54:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:38.295 17:54:30 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:38.295 17:54:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:38.295 ************************************ 00:38:38.295 START TEST fio_dif_rand_params 00:38:38.295 ************************************ 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.295 bdev_null0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:38.295 [2024-12-06 17:54:30.291032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:38.295 
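The trace above provisions the target side for this test: one null bdev with 16-byte metadata and DIF type 3, wrapped in an NVMe-oF subsystem and exposed on the TCP listener at 10.0.0.2:4420. Condensed to plain RPC calls, the sequence looks roughly like the sketch below; it assumes SPDK's scripts/rpc.py is the client behind the rpc_cmd wrapper, and every name, size, and address is copied from the trace.

# Sketch of the create_subsystem steps traced above, against a running SPDK target.
# (rpc.py stands for SPDK's scripts/rpc.py; the harness reaches it via rpc_cmd.)
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The matching teardown, visible at the end of each test in this log, is nvmf_delete_subsystem followed by bdev_null_delete.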
17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:38.295 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:38.295 { 00:38:38.295 "params": { 00:38:38.295 "name": "Nvme$subsystem", 00:38:38.295 "trtype": "$TEST_TRANSPORT", 00:38:38.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:38.295 "adrfam": "ipv4", 00:38:38.295 "trsvcid": "$NVMF_PORT", 00:38:38.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:38.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:38.295 "hdgst": ${hdgst:-false}, 00:38:38.296 "ddgst": ${ddgst:-false} 00:38:38.296 }, 00:38:38.296 "method": "bdev_nvme_attach_controller" 00:38:38.296 } 00:38:38.296 EOF 00:38:38.296 )") 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
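gen_nvmf_target_json, whose expansion is traced above, builds one bdev_nvme_attach_controller JSON fragment per subsystem with a heredoc inside a command substitution, accumulates the fragments in a bash array, and comma-joins them via a scoped IFS before jq pretty-prints the result (the joined output is printed a few records below). A minimal standalone sketch of that pattern, using only names visible in the trace — TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT, hdgst and ddgst come from the test environment; the function name here is illustrative, and the subshell/jq plumbing is inferred, not logged:

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments in a subshell so IFS=, actually applies to the
    # "${config[*]}" expansion, then let jq validate and pretty-print.
    (IFS=,; printf '%s\n' "${config[*]}") | jq .
}

Called as gen_target_json 0, this yields exactly the single Nvme0 attach-controller entry printed next in the trace.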
00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:38.296 "params": { 00:38:38.296 "name": "Nvme0", 00:38:38.296 "trtype": "tcp", 00:38:38.296 "traddr": "10.0.0.2", 00:38:38.296 "adrfam": "ipv4", 00:38:38.296 "trsvcid": "4420", 00:38:38.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:38.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:38.296 "hdgst": false, 00:38:38.296 "ddgst": false 00:38:38.296 }, 00:38:38.296 "method": "bdev_nvme_attach_controller" 00:38:38.296 }' 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:38.296 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:38.576 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:38.576 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:38.576 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:38.576 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:38.840 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:38.840 ... 
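With the JSON assembled and the sanitizer probe above resolving to an empty asan_lib, the harness starts stock fio with SPDK's external bdev ioengine, handing both the bdev config and the generated job file over anonymous descriptors (/dev/fd/62 and /dev/fd/61). Stripped of the wrapper, the launch looks roughly like this sketch; the plugin and fio paths are copied from the log, --spdk_json_conf is the plugin option the trace itself shows, gen_target_json is the sketch above, while the job-file body is an assumption reconstructed from the options this test sets (bs=128k, numjobs=3, iodepth=3, runtime=5) and from SPDK's usual Nvme0n1 namespace naming:

# Run fio against the attached bdev via the spdk_bdev ioengine plugin.
# The [filename0] job body below is reconstructed from the test parameters,
# not taken verbatim from the log.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_target_json 0) /dev/stdin <<'JOB'
[filename0]
filename=Nvme0n1
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
JOB

fio's banner right below ("Starting 3 threads") matches numjobs=3 with the thread option set.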
00:38:38.840 fio-3.35 00:38:38.840 Starting 3 threads 00:38:45.425 00:38:45.425 filename0: (groupid=0, jobs=1): err= 0: pid=1782618: Fri Dec 6 17:54:36 2024 00:38:45.425 read: IOPS=323, BW=40.4MiB/s (42.4MB/s)(204MiB/5044msec) 00:38:45.425 slat (nsec): min=5704, max=32033, avg=8473.05, stdev=1147.51 00:38:45.425 clat (usec): min=4705, max=90960, avg=9246.51, stdev=6819.12 00:38:45.425 lat (usec): min=4714, max=90969, avg=9254.99, stdev=6819.26 00:38:45.425 clat percentiles (usec): 00:38:45.425 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 7242], 00:38:45.425 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8455], 00:38:45.425 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11338], 00:38:45.425 | 99.00th=[47973], 99.50th=[50594], 99.90th=[89654], 99.95th=[90702], 00:38:45.425 | 99.99th=[90702] 00:38:45.425 bw ( KiB/s): min=33024, max=46080, per=40.38%, avg=41676.80, stdev=3758.16, samples=10 00:38:45.425 iops : min= 258, max= 360, avg=325.60, stdev=29.36, samples=10 00:38:45.425 lat (msec) : 10=82.27%, 20=15.64%, 50=1.53%, 100=0.55% 00:38:45.425 cpu : usr=94.25%, sys=5.53%, ctx=8, majf=0, minf=88 00:38:45.425 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:45.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.425 issued rwts: total=1630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.425 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:45.425 filename0: (groupid=0, jobs=1): err= 0: pid=1782619: Fri Dec 6 17:54:36 2024 00:38:45.425 read: IOPS=328, BW=41.1MiB/s (43.1MB/s)(208MiB/5046msec) 00:38:45.425 slat (nsec): min=8043, max=32314, avg=8741.65, stdev=1005.47 00:38:45.425 clat (usec): min=4293, max=88967, avg=9082.13, stdev=8049.43 00:38:45.425 lat (usec): min=4302, max=88976, avg=9090.87, stdev=8049.51 00:38:45.425 clat percentiles (usec): 00:38:45.425 | 1.00th=[ 5014], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 6390], 00:38:45.425 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 7898], 00:38:45.425 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[10028], 95.00th=[10814], 00:38:45.425 | 99.00th=[49021], 99.50th=[49021], 99.90th=[88605], 99.95th=[88605], 00:38:45.425 | 99.99th=[88605] 00:38:45.425 bw ( KiB/s): min=13568, max=50432, per=41.12%, avg=42444.80, stdev=10675.74, samples=10 00:38:45.425 iops : min= 106, max= 394, avg=331.60, stdev=83.40, samples=10 00:38:45.425 lat (msec) : 10=89.88%, 20=6.81%, 50=2.89%, 100=0.42% 00:38:45.425 cpu : usr=95.06%, sys=4.72%, ctx=7, majf=0, minf=145 00:38:45.425 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:45.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.425 issued rwts: total=1660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.425 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:45.425 filename0: (groupid=0, jobs=1): err= 0: pid=1782620: Fri Dec 6 17:54:36 2024 00:38:45.425 read: IOPS=154, BW=19.3MiB/s (20.2MB/s)(97.4MiB/5043msec) 00:38:45.425 slat (nsec): min=5515, max=33053, avg=6252.51, stdev=1079.84 00:38:45.425 clat (usec): min=4647, max=92016, avg=19358.57, stdev=21342.41 00:38:45.425 lat (usec): min=4652, max=92023, avg=19364.82, stdev=21342.48 00:38:45.425 clat percentiles (usec): 00:38:45.425 | 1.00th=[ 5145], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7570], 
00:38:45.425 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:38:45.425 | 70.00th=[ 9765], 80.00th=[47973], 90.00th=[50070], 95.00th=[51119], 00:38:45.425 | 99.00th=[90702], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:38:45.425 | 99.99th=[91751] 00:38:45.425 bw ( KiB/s): min=12544, max=27648, per=19.27%, avg=19891.20, stdev=4430.85, samples=10 00:38:45.425 iops : min= 98, max= 216, avg=155.40, stdev=34.62, samples=10 00:38:45.425 lat (msec) : 10=71.89%, 20=4.49%, 50=14.12%, 100=9.50% 00:38:45.425 cpu : usr=95.30%, sys=4.50%, ctx=13, majf=0, minf=119 00:38:45.425 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:45.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.425 issued rwts: total=779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.425 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:45.425 00:38:45.425 Run status group 0 (all jobs): 00:38:45.425 READ: bw=101MiB/s (106MB/s), 19.3MiB/s-41.1MiB/s (20.2MB/s-43.1MB/s), io=509MiB (533MB), run=5043-5046msec 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:45.425 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 bdev_null0 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 [2024-12-06 17:54:36.409982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 bdev_null1 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 bdev_null2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:45.426 { 00:38:45.426 "params": { 00:38:45.426 "name": "Nvme$subsystem", 00:38:45.426 "trtype": "$TEST_TRANSPORT", 00:38:45.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:45.426 "adrfam": "ipv4", 00:38:45.426 "trsvcid": "$NVMF_PORT", 00:38:45.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:45.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:45.426 "hdgst": ${hdgst:-false}, 00:38:45.426 "ddgst": ${ddgst:-false} 00:38:45.426 }, 00:38:45.426 "method": "bdev_nvme_attach_controller" 00:38:45.426 } 00:38:45.426 EOF 00:38:45.426 )") 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:45.426 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:45.426 { 00:38:45.426 "params": { 00:38:45.426 "name": "Nvme$subsystem", 00:38:45.426 "trtype": "$TEST_TRANSPORT", 00:38:45.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:45.427 "adrfam": "ipv4", 00:38:45.427 "trsvcid": "$NVMF_PORT", 00:38:45.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:45.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:45.427 "hdgst": ${hdgst:-false}, 00:38:45.427 "ddgst": ${ddgst:-false} 00:38:45.427 }, 00:38:45.427 "method": "bdev_nvme_attach_controller" 00:38:45.427 } 00:38:45.427 EOF 00:38:45.427 )") 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:45.427 { 00:38:45.427 "params": { 00:38:45.427 "name": "Nvme$subsystem", 00:38:45.427 "trtype": "$TEST_TRANSPORT", 00:38:45.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:45.427 "adrfam": "ipv4", 00:38:45.427 "trsvcid": "$NVMF_PORT", 00:38:45.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:45.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:45.427 "hdgst": ${hdgst:-false}, 00:38:45.427 "ddgst": ${ddgst:-false} 00:38:45.427 }, 00:38:45.427 "method": "bdev_nvme_attach_controller" 00:38:45.427 } 00:38:45.427 EOF 00:38:45.427 )") 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:45.427 "params": { 00:38:45.427 "name": "Nvme0", 00:38:45.427 "trtype": "tcp", 00:38:45.427 "traddr": "10.0.0.2", 00:38:45.427 "adrfam": "ipv4", 00:38:45.427 "trsvcid": "4420", 00:38:45.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:45.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:45.427 "hdgst": false, 00:38:45.427 "ddgst": false 00:38:45.427 }, 00:38:45.427 "method": "bdev_nvme_attach_controller" 00:38:45.427 },{ 00:38:45.427 "params": { 00:38:45.427 "name": "Nvme1", 00:38:45.427 "trtype": "tcp", 00:38:45.427 "traddr": "10.0.0.2", 00:38:45.427 "adrfam": "ipv4", 00:38:45.427 "trsvcid": "4420", 00:38:45.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:45.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:45.427 "hdgst": false, 00:38:45.427 "ddgst": false 00:38:45.427 }, 00:38:45.427 "method": "bdev_nvme_attach_controller" 00:38:45.427 },{ 00:38:45.427 "params": { 00:38:45.427 "name": "Nvme2", 00:38:45.427 "trtype": "tcp", 00:38:45.427 "traddr": "10.0.0.2", 00:38:45.427 "adrfam": "ipv4", 00:38:45.427 "trsvcid": "4420", 00:38:45.427 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:45.427 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:45.427 "hdgst": false, 00:38:45.427 "ddgst": false 00:38:45.427 }, 00:38:45.427 "method": "bdev_nvme_attach_controller" 00:38:45.427 }' 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:45.427 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:45.427 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:45.427 ... 00:38:45.427 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:45.427 ... 00:38:45.427 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:45.427 ... 00:38:45.427 fio-3.35 00:38:45.427 Starting 24 threads 00:38:57.716 00:38:57.716 filename0: (groupid=0, jobs=1): err= 0: pid=1782871: Fri Dec 6 17:54:48 2024 00:38:57.716 read: IOPS=707, BW=2828KiB/s (2896kB/s)(27.6MiB/10011msec) 00:38:57.716 slat (nsec): min=5665, max=63161, avg=6753.26, stdev=2421.15 00:38:57.716 clat (usec): min=4971, max=33877, avg=22573.59, stdev=3372.13 00:38:57.716 lat (usec): min=4989, max=33883, avg=22580.34, stdev=3371.91 00:38:57.716 clat percentiles (usec): 00:38:57.716 | 1.00th=[13173], 5.00th=[15401], 10.00th=[16450], 20.00th=[22676], 00:38:57.716 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:38:57.716 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24511], 95.00th=[24773], 00:38:57.716 | 99.00th=[25560], 99.50th=[25822], 99.90th=[28967], 99.95th=[32375], 00:38:57.716 | 99.99th=[33817] 00:38:57.716 bw ( KiB/s): min= 2560, max= 3888, per=4.41%, avg=2826.10, stdev=398.68, samples=20 00:38:57.716 iops : min= 640, max= 972, avg=706.45, stdev=99.66, samples=20 00:38:57.716 lat (msec) : 10=0.45%, 20=19.30%, 50=80.25% 00:38:57.716 cpu : usr=99.01%, sys=0.77%, ctx=13, majf=0, minf=42 00:38:57.716 IO depths : 1=5.0%, 2=10.0%, 4=21.3%, 8=56.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:38:57.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.716 complete : 0=0.0%, 4=93.0%, 8=1.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.716 issued rwts: total=7078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.716 filename0: (groupid=0, jobs=1): err= 0: pid=1782872: Fri Dec 6 17:54:48 2024 00:38:57.716 read: IOPS=658, BW=2634KiB/s (2698kB/s)(25.8MiB/10012msec) 00:38:57.716 slat (nsec): min=5696, max=78601, avg=20193.77, stdev=13369.23 00:38:57.716 clat (usec): min=11289, max=45324, avg=24112.89, stdev=2110.00 00:38:57.716 lat (usec): min=11295, max=45335, avg=24133.09, stdev=2110.44 00:38:57.716 clat percentiles (usec): 00:38:57.716 | 1.00th=[16319], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:38:57.716 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.716 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:38:57.716 | 99.00th=[33817], 99.50th=[35390], 99.90th=[39060], 99.95th=[45351], 00:38:57.716 | 99.99th=[45351] 00:38:57.716 bw ( KiB/s): min= 2522, max= 2704, per=4.10%, avg=2627.58, stdev=65.08, samples=19 00:38:57.716 iops : min= 630, max= 676, avg=656.84, stdev=16.29, samples=19 00:38:57.716 lat (msec) : 20=2.99%, 50=97.01% 00:38:57.716 cpu : usr=98.97%, sys=0.80%, ctx=13, majf=0, minf=49 
00:38:57.716 IO depths : 1=4.3%, 2=10.0%, 4=23.6%, 8=53.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:38:57.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.716 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 issued rwts: total=6594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.717 filename0: (groupid=0, jobs=1): err= 0: pid=1782873: Fri Dec 6 17:54:48 2024 00:38:57.717 read: IOPS=660, BW=2640KiB/s (2703kB/s)(25.8MiB/10003msec) 00:38:57.717 slat (nsec): min=5678, max=61185, avg=16984.50, stdev=10440.54 00:38:57.717 clat (usec): min=4636, max=47217, avg=24080.11, stdev=1910.31 00:38:57.717 lat (usec): min=4642, max=47238, avg=24097.09, stdev=1910.69 00:38:57.717 clat percentiles (usec): 00:38:57.717 | 1.00th=[17433], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:38:57.717 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.717 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.717 | 99.00th=[25822], 99.50th=[31065], 99.90th=[46924], 99.95th=[47449], 00:38:57.717 | 99.99th=[47449] 00:38:57.717 bw ( KiB/s): min= 2436, max= 2688, per=4.10%, avg=2626.95, stdev=77.26, samples=19 00:38:57.717 iops : min= 609, max= 672, avg=656.68, stdev=19.28, samples=19 00:38:57.717 lat (msec) : 10=0.33%, 20=1.18%, 50=98.49% 00:38:57.717 cpu : usr=98.98%, sys=0.78%, ctx=12, majf=0, minf=34 00:38:57.717 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:38:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 issued rwts: total=6602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.717 filename0: (groupid=0, jobs=1): err= 0: pid=1782874: Fri Dec 6 17:54:48 2024 00:38:57.717 read: IOPS=665, BW=2660KiB/s (2724kB/s)(26.0MiB/10008msec) 00:38:57.717 slat (nsec): min=5675, max=74317, avg=9406.60, stdev=6178.14 00:38:57.717 clat (usec): min=4534, max=31716, avg=23977.56, stdev=1929.06 00:38:57.717 lat (usec): min=4547, max=31723, avg=23986.97, stdev=1927.58 00:38:57.717 clat percentiles (usec): 00:38:57.717 | 1.00th=[11731], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:38:57.717 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.717 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.717 | 99.00th=[25822], 99.50th=[25822], 99.90th=[30540], 99.95th=[31327], 00:38:57.717 | 99.99th=[31589] 00:38:57.717 bw ( KiB/s): min= 2554, max= 2944, per=4.15%, avg=2660.74, stdev=91.68, samples=19 00:38:57.717 iops : min= 638, max= 736, avg=665.16, stdev=22.95, samples=19 00:38:57.717 lat (msec) : 10=0.72%, 20=1.37%, 50=97.91% 00:38:57.717 cpu : usr=98.89%, sys=0.88%, ctx=12, majf=0, minf=37 00:38:57.717 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.717 filename0: (groupid=0, jobs=1): err= 0: pid=1782875: Fri Dec 6 17:54:48 2024 00:38:57.717 read: IOPS=685, BW=2743KiB/s (2808kB/s)(26.8MiB/10014msec) 00:38:57.717 slat 
(nsec): min=5669, max=81960, avg=12845.50, stdev=10825.90 00:38:57.717 clat (usec): min=5730, max=38699, avg=23234.20, stdev=3050.28 00:38:57.717 lat (usec): min=5744, max=38709, avg=23247.04, stdev=3051.66 00:38:57.717 clat percentiles (usec): 00:38:57.717 | 1.00th=[12125], 5.00th=[16188], 10.00th=[17171], 20.00th=[23725], 00:38:57.717 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:38:57.717 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.717 | 99.00th=[30016], 99.50th=[31327], 99.90th=[37487], 99.95th=[38536], 00:38:57.717 | 99.99th=[38536] 00:38:57.717 bw ( KiB/s): min= 2560, max= 3808, per=4.28%, avg=2739.70, stdev=288.43, samples=20 00:38:57.717 iops : min= 640, max= 952, avg=684.90, stdev=72.11, samples=20 00:38:57.717 lat (msec) : 10=0.58%, 20=11.39%, 50=88.03% 00:38:57.717 cpu : usr=98.30%, sys=1.12%, ctx=197, majf=0, minf=33 00:38:57.717 IO depths : 1=5.1%, 2=10.6%, 4=22.7%, 8=54.1%, 16=7.5%, 32=0.0%, >=64=0.0% 00:38:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 issued rwts: total=6866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.717 filename0: (groupid=0, jobs=1): err= 0: pid=1782876: Fri Dec 6 17:54:48 2024 00:38:57.717 read: IOPS=679, BW=2720KiB/s (2785kB/s)(26.6MiB/10010msec) 00:38:57.717 slat (nsec): min=5655, max=88396, avg=17891.41, stdev=13812.59 00:38:57.717 clat (usec): min=11784, max=44915, avg=23389.62, stdev=3448.49 00:38:57.717 lat (usec): min=11790, max=44940, avg=23407.51, stdev=3451.28 00:38:57.717 clat percentiles (usec): 00:38:57.717 | 1.00th=[14877], 5.00th=[15926], 10.00th=[17171], 20.00th=[23462], 00:38:57.717 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:38:57.717 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[26870], 00:38:57.717 | 99.00th=[33817], 99.50th=[36439], 99.90th=[39584], 99.95th=[44827], 00:38:57.717 | 99.99th=[44827] 00:38:57.717 bw ( KiB/s): min= 2554, max= 3136, per=4.25%, avg=2723.89, stdev=168.30, samples=19 00:38:57.717 iops : min= 638, max= 784, avg=680.95, stdev=42.10, samples=19 00:38:57.717 lat (msec) : 20=13.59%, 50=86.41% 00:38:57.717 cpu : usr=98.66%, sys=1.00%, ctx=103, majf=0, minf=73 00:38:57.717 IO depths : 1=4.2%, 2=8.8%, 4=20.5%, 8=58.1%, 16=8.5%, 32=0.0%, >=64=0.0% 00:38:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 issued rwts: total=6806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.717 filename0: (groupid=0, jobs=1): err= 0: pid=1782877: Fri Dec 6 17:54:48 2024 00:38:57.717 read: IOPS=661, BW=2646KiB/s (2709kB/s)(25.9MiB/10014msec) 00:38:57.717 slat (nsec): min=5691, max=86777, avg=17824.42, stdev=14417.50 00:38:57.717 clat (usec): min=9654, max=34282, avg=24044.70, stdev=1400.30 00:38:57.717 lat (usec): min=9667, max=34288, avg=24062.52, stdev=1399.78 00:38:57.717 clat percentiles (usec): 00:38:57.717 | 1.00th=[15926], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:38:57.717 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.717 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.717 | 99.00th=[25560], 99.50th=[26346], 99.90th=[33162], 99.95th=[33817], 00:38:57.717 | 
99.99th=[34341] 00:38:57.717 bw ( KiB/s): min= 2560, max= 2693, per=4.13%, avg=2643.45, stdev=62.84, samples=20 00:38:57.717 iops : min= 640, max= 673, avg=660.85, stdev=15.70, samples=20 00:38:57.717 lat (msec) : 10=0.24%, 20=1.15%, 50=98.61% 00:38:57.717 cpu : usr=98.83%, sys=0.94%, ctx=14, majf=0, minf=31 00:38:57.717 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:38:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.717 filename0: (groupid=0, jobs=1): err= 0: pid=1782878: Fri Dec 6 17:54:48 2024 00:38:57.717 read: IOPS=662, BW=2651KiB/s (2715kB/s)(25.9MiB/10008msec) 00:38:57.717 slat (nsec): min=5652, max=82268, avg=18727.00, stdev=13597.97 00:38:57.717 clat (usec): min=8793, max=39500, avg=23985.92, stdev=2852.21 00:38:57.717 lat (usec): min=8799, max=39512, avg=24004.65, stdev=2853.45 00:38:57.717 clat percentiles (usec): 00:38:57.717 | 1.00th=[14615], 5.00th=[17957], 10.00th=[23462], 20.00th=[23725], 00:38:57.717 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.717 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[26346], 00:38:57.717 | 99.00th=[35390], 99.50th=[37487], 99.90th=[39060], 99.95th=[39584], 00:38:57.717 | 99.99th=[39584] 00:38:57.717 bw ( KiB/s): min= 2560, max= 2736, per=4.13%, avg=2642.11, stdev=59.38, samples=19 00:38:57.717 iops : min= 640, max= 684, avg=660.53, stdev=14.85, samples=19 00:38:57.717 lat (msec) : 10=0.11%, 20=6.30%, 50=93.59% 00:38:57.717 cpu : usr=98.99%, sys=0.77%, ctx=27, majf=0, minf=35 00:38:57.717 IO depths : 1=2.6%, 2=7.2%, 4=20.2%, 8=59.5%, 16=10.5%, 32=0.0%, >=64=0.0% 00:38:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 complete : 0=0.0%, 4=93.1%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 issued rwts: total=6634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.717 filename1: (groupid=0, jobs=1): err= 0: pid=1782879: Fri Dec 6 17:54:48 2024 00:38:57.717 read: IOPS=659, BW=2640KiB/s (2703kB/s)(25.8MiB/10010msec) 00:38:57.717 slat (nsec): min=5742, max=70045, avg=20829.18, stdev=11532.63 00:38:57.717 clat (usec): min=11247, max=34642, avg=24059.98, stdev=1068.18 00:38:57.717 lat (usec): min=11256, max=34654, avg=24080.81, stdev=1068.27 00:38:57.717 clat percentiles (usec): 00:38:57.717 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:38:57.717 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.717 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.717 | 99.00th=[25822], 99.50th=[28181], 99.90th=[33162], 99.95th=[33817], 00:38:57.717 | 99.99th=[34866] 00:38:57.717 bw ( KiB/s): min= 2554, max= 2688, per=4.11%, avg=2633.47, stdev=65.06, samples=19 00:38:57.717 iops : min= 638, max= 672, avg=658.32, stdev=16.28, samples=19 00:38:57.717 lat (msec) : 20=0.86%, 50=99.14% 00:38:57.717 cpu : usr=98.83%, sys=0.93%, ctx=13, majf=0, minf=42 00:38:57.717 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.717 issued rwts: 
total=6606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.717 filename1: (groupid=0, jobs=1): err= 0: pid=1782880: Fri Dec 6 17:54:48 2024 00:38:57.717 read: IOPS=659, BW=2639KiB/s (2703kB/s)(25.8MiB/10015msec) 00:38:57.717 slat (nsec): min=5720, max=77613, avg=18404.66, stdev=11475.16 00:38:57.717 clat (usec): min=15427, max=32076, avg=24097.75, stdev=907.37 00:38:57.717 lat (usec): min=15443, max=32086, avg=24116.16, stdev=906.48 00:38:57.717 clat percentiles (usec): 00:38:57.718 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:38:57.718 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.718 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.718 | 99.00th=[25822], 99.50th=[26346], 99.90th=[31327], 99.95th=[31589], 00:38:57.718 | 99.99th=[32113] 00:38:57.718 bw ( KiB/s): min= 2554, max= 2688, per=4.12%, avg=2635.95, stdev=62.83, samples=20 00:38:57.718 iops : min= 638, max= 672, avg=658.95, stdev=15.73, samples=20 00:38:57.718 lat (msec) : 20=0.85%, 50=99.15% 00:38:57.718 cpu : usr=98.80%, sys=0.93%, ctx=56, majf=0, minf=37 00:38:57.718 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:57.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.718 filename1: (groupid=0, jobs=1): err= 0: pid=1782881: Fri Dec 6 17:54:48 2024 00:38:57.718 read: IOPS=662, BW=2649KiB/s (2712kB/s)(25.9MiB/10003msec) 00:38:57.718 slat (nsec): min=5525, max=78091, avg=15823.71, stdev=10843.47 00:38:57.718 clat (usec): min=3144, max=54651, avg=24065.91, stdev=2899.19 00:38:57.718 lat (usec): min=3149, max=54671, avg=24081.74, stdev=2899.89 00:38:57.718 clat percentiles (usec): 00:38:57.718 | 1.00th=[15139], 5.00th=[21627], 10.00th=[23462], 20.00th=[23725], 00:38:57.718 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.718 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25560], 00:38:57.718 | 99.00th=[32113], 99.50th=[33817], 99.90th=[54789], 99.95th=[54789], 00:38:57.718 | 99.99th=[54789] 00:38:57.718 bw ( KiB/s): min= 2308, max= 2768, per=4.11%, avg=2632.00, stdev=92.98, samples=19 00:38:57.718 iops : min= 577, max= 692, avg=657.95, stdev=23.27, samples=19 00:38:57.718 lat (msec) : 4=0.12%, 10=0.42%, 20=3.99%, 50=95.23%, 100=0.24% 00:38:57.718 cpu : usr=98.86%, sys=0.88%, ctx=23, majf=0, minf=40 00:38:57.718 IO depths : 1=0.7%, 2=2.9%, 4=11.0%, 8=70.9%, 16=14.5%, 32=0.0%, >=64=0.0% 00:38:57.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 complete : 0=0.0%, 4=91.4%, 8=5.3%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.718 filename1: (groupid=0, jobs=1): err= 0: pid=1782882: Fri Dec 6 17:54:48 2024 00:38:57.718 read: IOPS=660, BW=2641KiB/s (2704kB/s)(25.8MiB/10004msec) 00:38:57.718 slat (nsec): min=5528, max=51584, avg=14234.21, stdev=8517.01 00:38:57.718 clat (usec): min=4332, max=55051, avg=24127.82, stdev=2207.90 00:38:57.718 lat (usec): min=4348, max=55072, avg=24142.06, stdev=2208.37 00:38:57.718 clat percentiles (usec): 00:38:57.718 | 1.00th=[16581], 
5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:38:57.718 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.718 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:38:57.718 | 99.00th=[30802], 99.50th=[32900], 99.90th=[48497], 99.95th=[48497], 00:38:57.718 | 99.99th=[55313] 00:38:57.718 bw ( KiB/s): min= 2416, max= 2688, per=4.10%, avg=2626.74, stdev=75.59, samples=19 00:38:57.718 iops : min= 604, max= 672, avg=656.63, stdev=18.86, samples=19 00:38:57.718 lat (msec) : 10=0.51%, 20=1.51%, 50=97.94%, 100=0.03% 00:38:57.718 cpu : usr=98.55%, sys=1.12%, ctx=45, majf=0, minf=48 00:38:57.718 IO depths : 1=1.9%, 2=8.1%, 4=24.8%, 8=54.6%, 16=10.6%, 32=0.0%, >=64=0.0% 00:38:57.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 issued rwts: total=6604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.718 filename1: (groupid=0, jobs=1): err= 0: pid=1782883: Fri Dec 6 17:54:48 2024 00:38:57.718 read: IOPS=680, BW=2723KiB/s (2788kB/s)(26.6MiB/10012msec) 00:38:57.718 slat (nsec): min=5669, max=86951, avg=17733.40, stdev=13617.50 00:38:57.718 clat (usec): min=11400, max=44803, avg=23351.01, stdev=3351.43 00:38:57.718 lat (usec): min=11408, max=44839, avg=23368.74, stdev=3354.22 00:38:57.718 clat percentiles (usec): 00:38:57.718 | 1.00th=[15008], 5.00th=[16057], 10.00th=[17433], 20.00th=[23462], 00:38:57.718 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:38:57.718 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25560], 00:38:57.718 | 99.00th=[35914], 99.50th=[37487], 99.90th=[38536], 99.95th=[39060], 00:38:57.718 | 99.99th=[44827] 00:38:57.718 bw ( KiB/s): min= 2560, max= 3312, per=4.25%, avg=2721.32, stdev=214.06, samples=19 00:38:57.718 iops : min= 640, max= 828, avg=680.32, stdev=53.48, samples=19 00:38:57.718 lat (msec) : 20=13.09%, 50=86.91% 00:38:57.718 cpu : usr=98.90%, sys=0.85%, ctx=57, majf=0, minf=30 00:38:57.718 IO depths : 1=4.1%, 2=8.7%, 4=20.7%, 8=57.9%, 16=8.5%, 32=0.0%, >=64=0.0% 00:38:57.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 issued rwts: total=6816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.718 filename1: (groupid=0, jobs=1): err= 0: pid=1782884: Fri Dec 6 17:54:48 2024 00:38:57.718 read: IOPS=660, BW=2644KiB/s (2707kB/s)(25.9MiB/10020msec) 00:38:57.718 slat (nsec): min=5671, max=84522, avg=18576.14, stdev=13890.25 00:38:57.718 clat (usec): min=7302, max=40378, avg=24045.89, stdev=3026.08 00:38:57.718 lat (usec): min=7319, max=40386, avg=24064.47, stdev=3026.52 00:38:57.718 clat percentiles (usec): 00:38:57.718 | 1.00th=[13435], 5.00th=[18220], 10.00th=[23200], 20.00th=[23725], 00:38:57.718 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.718 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25297], 95.00th=[29230], 00:38:57.718 | 99.00th=[36439], 99.50th=[37487], 99.90th=[39584], 99.95th=[40109], 00:38:57.718 | 99.99th=[40633] 00:38:57.718 bw ( KiB/s): min= 2432, max= 2816, per=4.13%, avg=2644.50, stdev=97.83, samples=20 00:38:57.718 iops : min= 608, max= 704, avg=661.10, stdev=24.41, samples=20 00:38:57.718 lat (msec) : 10=0.23%, 20=6.33%, 50=93.45% 00:38:57.718 cpu : 
usr=98.90%, sys=0.87%, ctx=13, majf=0, minf=37 00:38:57.718 IO depths : 1=4.4%, 2=9.1%, 4=20.4%, 8=57.3%, 16=8.8%, 32=0.0%, >=64=0.0% 00:38:57.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 complete : 0=0.0%, 4=93.2%, 8=1.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 issued rwts: total=6622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.718 filename1: (groupid=0, jobs=1): err= 0: pid=1782885: Fri Dec 6 17:54:48 2024 00:38:57.718 read: IOPS=658, BW=2636KiB/s (2699kB/s)(25.8MiB/10004msec) 00:38:57.718 slat (nsec): min=5680, max=80661, avg=20315.55, stdev=13579.50 00:38:57.718 clat (usec): min=10165, max=52903, avg=24078.73, stdev=1839.42 00:38:57.718 lat (usec): min=10172, max=52920, avg=24099.05, stdev=1839.20 00:38:57.718 clat percentiles (usec): 00:38:57.718 | 1.00th=[15533], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:38:57.718 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.718 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.718 | 99.00th=[31851], 99.50th=[33817], 99.90th=[43254], 99.95th=[43254], 00:38:57.718 | 99.99th=[52691] 00:38:57.718 bw ( KiB/s): min= 2432, max= 2688, per=4.10%, avg=2627.37, stdev=78.31, samples=19 00:38:57.718 iops : min= 608, max= 672, avg=656.84, stdev=19.58, samples=19 00:38:57.718 lat (msec) : 20=1.40%, 50=98.57%, 100=0.03% 00:38:57.718 cpu : usr=98.04%, sys=1.28%, ctx=353, majf=0, minf=36 00:38:57.718 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:38:57.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.718 filename1: (groupid=0, jobs=1): err= 0: pid=1782886: Fri Dec 6 17:54:48 2024 00:38:57.718 read: IOPS=661, BW=2646KiB/s (2710kB/s)(25.9MiB/10012msec) 00:38:57.718 slat (nsec): min=5686, max=81358, avg=14943.20, stdev=13209.10 00:38:57.718 clat (usec): min=10061, max=32925, avg=24065.30, stdev=1260.94 00:38:57.718 lat (usec): min=10072, max=32935, avg=24080.25, stdev=1260.20 00:38:57.718 clat percentiles (usec): 00:38:57.718 | 1.00th=[17957], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:38:57.718 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.718 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.718 | 99.00th=[25822], 99.50th=[26346], 99.90th=[31589], 99.95th=[32637], 00:38:57.718 | 99.99th=[32900] 00:38:57.718 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2647.58, stdev=74.55, samples=19 00:38:57.718 iops : min= 640, max= 704, avg=661.89, stdev=18.64, samples=19 00:38:57.718 lat (msec) : 20=1.39%, 50=98.61% 00:38:57.718 cpu : usr=98.94%, sys=0.77%, ctx=114, majf=0, minf=50 00:38:57.718 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:57.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.718 filename2: (groupid=0, jobs=1): err= 0: pid=1782887: Fri Dec 6 17:54:48 2024 00:38:57.718 read: IOPS=660, BW=2640KiB/s 
(2704kB/s)(25.8MiB/10008msec) 00:38:57.718 slat (nsec): min=5552, max=82125, avg=19781.74, stdev=12911.10 00:38:57.718 clat (usec): min=7873, max=43153, avg=24059.47, stdev=1666.61 00:38:57.718 lat (usec): min=7878, max=43176, avg=24079.25, stdev=1666.67 00:38:57.718 clat percentiles (usec): 00:38:57.718 | 1.00th=[20317], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:38:57.718 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.718 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.718 | 99.00th=[25822], 99.50th=[30540], 99.90th=[43254], 99.95th=[43254], 00:38:57.718 | 99.99th=[43254] 00:38:57.718 bw ( KiB/s): min= 2436, max= 2688, per=4.10%, avg=2627.58, stdev=76.47, samples=19 00:38:57.718 iops : min= 609, max= 672, avg=656.89, stdev=19.12, samples=19 00:38:57.718 lat (msec) : 10=0.21%, 20=0.79%, 50=99.00% 00:38:57.718 cpu : usr=98.88%, sys=0.89%, ctx=14, majf=0, minf=44 00:38:57.718 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:57.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.718 issued rwts: total=6606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.719 filename2: (groupid=0, jobs=1): err= 0: pid=1782888: Fri Dec 6 17:54:48 2024 00:38:57.719 read: IOPS=661, BW=2647KiB/s (2711kB/s)(25.9MiB/10003msec) 00:38:57.719 slat (nsec): min=5598, max=62511, avg=16876.34, stdev=10444.50 00:38:57.719 clat (usec): min=4209, max=47305, avg=24030.69, stdev=2756.07 00:38:57.719 lat (usec): min=4215, max=47325, avg=24047.57, stdev=2757.08 00:38:57.719 clat percentiles (usec): 00:38:57.719 | 1.00th=[14615], 5.00th=[20055], 10.00th=[23462], 20.00th=[23725], 00:38:57.719 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.719 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25822], 00:38:57.719 | 99.00th=[32637], 99.50th=[35914], 99.90th=[47449], 99.95th=[47449], 00:38:57.719 | 99.99th=[47449] 00:38:57.719 bw ( KiB/s): min= 2432, max= 2752, per=4.11%, avg=2633.68, stdev=95.58, samples=19 00:38:57.719 iops : min= 608, max= 688, avg=658.37, stdev=23.85, samples=19 00:38:57.719 lat (msec) : 10=0.48%, 20=4.47%, 50=95.05% 00:38:57.719 cpu : usr=98.26%, sys=1.15%, ctx=197, majf=0, minf=28 00:38:57.719 IO depths : 1=4.5%, 2=9.2%, 4=20.0%, 8=57.6%, 16=8.7%, 32=0.0%, >=64=0.0% 00:38:57.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 complete : 0=0.0%, 4=92.9%, 8=2.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 issued rwts: total=6620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.719 filename2: (groupid=0, jobs=1): err= 0: pid=1782889: Fri Dec 6 17:54:48 2024 00:38:57.719 read: IOPS=691, BW=2767KiB/s (2834kB/s)(27.0MiB/10003msec) 00:38:57.719 slat (nsec): min=5511, max=80503, avg=13264.52, stdev=10172.68 00:38:57.719 clat (usec): min=3186, max=58994, avg=23056.35, stdev=3591.06 00:38:57.719 lat (usec): min=3191, max=59018, avg=23069.61, stdev=3592.24 00:38:57.719 clat percentiles (usec): 00:38:57.719 | 1.00th=[11338], 5.00th=[15926], 10.00th=[17171], 20.00th=[23200], 00:38:57.719 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:38:57.719 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.719 | 99.00th=[31327], 
99.50th=[36439], 99.90th=[46924], 99.95th=[46924], 00:38:57.719 | 99.99th=[58983] 00:38:57.719 bw ( KiB/s): min= 2533, max= 3104, per=4.31%, avg=2758.32, stdev=158.52, samples=19 00:38:57.719 iops : min= 633, max= 776, avg=689.53, stdev=39.64, samples=19 00:38:57.719 lat (msec) : 4=0.03%, 10=0.46%, 20=14.55%, 50=84.93%, 100=0.03% 00:38:57.719 cpu : usr=99.09%, sys=0.66%, ctx=34, majf=0, minf=35 00:38:57.719 IO depths : 1=0.7%, 2=2.6%, 4=8.6%, 8=73.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:38:57.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 complete : 0=0.0%, 4=90.8%, 8=6.7%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 issued rwts: total=6920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.719 filename2: (groupid=0, jobs=1): err= 0: pid=1782890: Fri Dec 6 17:54:48 2024 00:38:57.719 read: IOPS=663, BW=2654KiB/s (2718kB/s)(25.9MiB/10008msec) 00:38:57.719 slat (nsec): min=5687, max=70192, avg=15606.28, stdev=11105.92 00:38:57.719 clat (usec): min=6719, max=33813, avg=23982.48, stdev=1863.59 00:38:57.719 lat (usec): min=6766, max=33835, avg=23998.09, stdev=1863.84 00:38:57.719 clat percentiles (usec): 00:38:57.719 | 1.00th=[14877], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:38:57.719 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.719 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.719 | 99.00th=[25822], 99.50th=[31589], 99.90th=[33162], 99.95th=[33424], 00:38:57.719 | 99.99th=[33817] 00:38:57.719 bw ( KiB/s): min= 2554, max= 2944, per=4.14%, avg=2654.00, stdev=93.17, samples=19 00:38:57.719 iops : min= 638, max= 736, avg=663.47, stdev=23.32, samples=19 00:38:57.719 lat (msec) : 10=0.45%, 20=1.93%, 50=97.62% 00:38:57.719 cpu : usr=98.13%, sys=1.20%, ctx=150, majf=0, minf=47 00:38:57.719 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:38:57.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.719 filename2: (groupid=0, jobs=1): err= 0: pid=1782891: Fri Dec 6 17:54:48 2024 00:38:57.719 read: IOPS=679, BW=2716KiB/s (2782kB/s)(26.6MiB/10013msec) 00:38:57.719 slat (nsec): min=5673, max=84453, avg=13357.22, stdev=10869.51 00:38:57.719 clat (usec): min=11262, max=42129, avg=23473.39, stdev=3945.17 00:38:57.719 lat (usec): min=11268, max=42136, avg=23486.74, stdev=3945.88 00:38:57.719 clat percentiles (usec): 00:38:57.719 | 1.00th=[14746], 5.00th=[16581], 10.00th=[18220], 20.00th=[20317], 00:38:57.719 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:38:57.719 | 70.00th=[24511], 80.00th=[24773], 90.00th=[27657], 95.00th=[29754], 00:38:57.719 | 99.00th=[36963], 99.50th=[38536], 99.90th=[40109], 99.95th=[42206], 00:38:57.719 | 99.99th=[42206] 00:38:57.719 bw ( KiB/s): min= 2432, max= 2976, per=4.22%, avg=2703.37, stdev=123.19, samples=19 00:38:57.719 iops : min= 608, max= 744, avg=675.79, stdev=30.77, samples=19 00:38:57.719 lat (msec) : 20=19.18%, 50=80.82% 00:38:57.719 cpu : usr=99.02%, sys=0.74%, ctx=13, majf=0, minf=81 00:38:57.719 IO depths : 1=1.3%, 2=2.6%, 4=7.6%, 8=75.0%, 16=13.5%, 32=0.0%, >=64=0.0% 00:38:57.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 complete 
: 0=0.0%, 4=89.9%, 8=6.8%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 issued rwts: total=6800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.719 filename2: (groupid=0, jobs=1): err= 0: pid=1782892: Fri Dec 6 17:54:48 2024 00:38:57.719 read: IOPS=659, BW=2639KiB/s (2702kB/s)(25.8MiB/10017msec) 00:38:57.719 slat (nsec): min=5698, max=75768, avg=21813.73, stdev=14104.49 00:38:57.719 clat (usec): min=13890, max=34559, avg=24046.75, stdev=1217.81 00:38:57.719 lat (usec): min=13908, max=34567, avg=24068.56, stdev=1217.53 00:38:57.719 clat percentiles (usec): 00:38:57.719 | 1.00th=[17957], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:38:57.719 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:38:57.719 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.719 | 99.00th=[26084], 99.50th=[30540], 99.90th=[33162], 99.95th=[34341], 00:38:57.719 | 99.99th=[34341] 00:38:57.719 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2636.80, stdev=64.34, samples=20 00:38:57.719 iops : min= 640, max= 672, avg=659.20, stdev=16.08, samples=20 00:38:57.719 lat (msec) : 20=1.57%, 50=98.43% 00:38:57.719 cpu : usr=98.38%, sys=1.07%, ctx=231, majf=0, minf=31 00:38:57.719 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:57.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.719 filename2: (groupid=0, jobs=1): err= 0: pid=1782893: Fri Dec 6 17:54:48 2024 00:38:57.719 read: IOPS=662, BW=2650KiB/s (2714kB/s)(25.9MiB/10015msec) 00:38:57.719 slat (nsec): min=5692, max=82191, avg=18036.13, stdev=13340.85 00:38:57.719 clat (usec): min=9667, max=39358, avg=23992.92, stdev=1585.53 00:38:57.719 lat (usec): min=9680, max=39365, avg=24010.96, stdev=1585.59 00:38:57.719 clat percentiles (usec): 00:38:57.719 | 1.00th=[15664], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:38:57.719 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:38:57.719 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.719 | 99.00th=[25822], 99.50th=[32113], 99.90th=[33817], 99.95th=[34341], 00:38:57.719 | 99.99th=[39584] 00:38:57.719 bw ( KiB/s): min= 2560, max= 2784, per=4.13%, avg=2648.00, stdev=69.55, samples=20 00:38:57.719 iops : min= 640, max= 696, avg=662.00, stdev=17.39, samples=20 00:38:57.719 lat (msec) : 10=0.09%, 20=2.11%, 50=97.80% 00:38:57.719 cpu : usr=97.45%, sys=1.58%, ctx=407, majf=0, minf=45 00:38:57.719 IO depths : 1=5.8%, 2=11.9%, 4=24.7%, 8=50.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:57.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 issued rwts: total=6636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.719 filename2: (groupid=0, jobs=1): err= 0: pid=1782894: Fri Dec 6 17:54:48 2024 00:38:57.719 read: IOPS=658, BW=2635KiB/s (2699kB/s)(25.8MiB/10005msec) 00:38:57.719 slat (nsec): min=5704, max=74634, avg=21554.94, stdev=12780.82 00:38:57.719 clat (usec): min=13099, max=45529, avg=24077.50, stdev=1384.60 00:38:57.719 lat (usec): min=13106, max=45552, avg=24099.06, 
stdev=1384.70 00:38:57.719 clat percentiles (usec): 00:38:57.719 | 1.00th=[17957], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:38:57.719 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:38:57.719 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:38:57.719 | 99.00th=[26346], 99.50th=[31589], 99.90th=[38011], 99.95th=[38011], 00:38:57.719 | 99.99th=[45351] 00:38:57.719 bw ( KiB/s): min= 2432, max= 2688, per=4.10%, avg=2627.37, stdev=78.31, samples=19 00:38:57.719 iops : min= 608, max= 672, avg=656.84, stdev=19.58, samples=19 00:38:57.719 lat (msec) : 20=1.30%, 50=98.70% 00:38:57.719 cpu : usr=98.82%, sys=0.92%, ctx=81, majf=0, minf=26 00:38:57.719 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:57.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:57.719 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:57.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:57.719 00:38:57.719 Run status group 0 (all jobs): 00:38:57.719 READ: bw=62.5MiB/s (65.6MB/s), 2634KiB/s-2828KiB/s (2698kB/s-2896kB/s), io=626MiB (657MB), run=10003-10020msec 00:38:57.719 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:57.719 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:57.719 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:57.719 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:57.719 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:57.719 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:57.719 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 bdev_null0 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 [2024-12-06 17:54:48.320946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 bdev_null1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:57.720 17:54:48 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:57.720 { 00:38:57.720 "params": { 00:38:57.720 "name": "Nvme$subsystem", 00:38:57.720 "trtype": "$TEST_TRANSPORT", 00:38:57.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:57.720 "adrfam": "ipv4", 00:38:57.720 "trsvcid": "$NVMF_PORT", 00:38:57.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:57.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:57.720 "hdgst": ${hdgst:-false}, 00:38:57.720 "ddgst": ${ddgst:-false} 00:38:57.720 }, 00:38:57.720 "method": "bdev_nvme_attach_controller" 00:38:57.720 } 00:38:57.720 EOF 00:38:57.720 )") 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:57.720 { 00:38:57.720 "params": { 00:38:57.720 "name": "Nvme$subsystem", 00:38:57.720 "trtype": "$TEST_TRANSPORT", 00:38:57.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:57.720 "adrfam": "ipv4", 00:38:57.720 "trsvcid": "$NVMF_PORT", 00:38:57.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:57.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:57.720 "hdgst": ${hdgst:-false}, 00:38:57.720 "ddgst": ${ddgst:-false} 00:38:57.720 }, 00:38:57.720 "method": "bdev_nvme_attach_controller" 00:38:57.720 } 00:38:57.720 EOF 
00:38:57.720 )") 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:57.720 17:54:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:57.721 "params": { 00:38:57.721 "name": "Nvme0", 00:38:57.721 "trtype": "tcp", 00:38:57.721 "traddr": "10.0.0.2", 00:38:57.721 "adrfam": "ipv4", 00:38:57.721 "trsvcid": "4420", 00:38:57.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:57.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:57.721 "hdgst": false, 00:38:57.721 "ddgst": false 00:38:57.721 }, 00:38:57.721 "method": "bdev_nvme_attach_controller" 00:38:57.721 },{ 00:38:57.721 "params": { 00:38:57.721 "name": "Nvme1", 00:38:57.721 "trtype": "tcp", 00:38:57.721 "traddr": "10.0.0.2", 00:38:57.721 "adrfam": "ipv4", 00:38:57.721 "trsvcid": "4420", 00:38:57.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:57.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:57.721 "hdgst": false, 00:38:57.721 "ddgst": false 00:38:57.721 }, 00:38:57.721 "method": "bdev_nvme_attach_controller" 00:38:57.721 }' 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:57.721 17:54:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:57.721 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:57.721 ... 00:38:57.721 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:57.721 ... 
00:38:57.721 fio-3.35 00:38:57.721 Starting 4 threads 00:39:03.030 00:39:03.030 filename0: (groupid=0, jobs=1): err= 0: pid=1783198: Fri Dec 6 17:54:54 2024 00:39:03.030 read: IOPS=2910, BW=22.7MiB/s (23.8MB/s)(114MiB/5003msec) 00:39:03.030 slat (nsec): min=5483, max=48364, avg=8712.28, stdev=2887.51 00:39:03.030 clat (usec): min=1384, max=44771, avg=2726.28, stdev=1007.93 00:39:03.030 lat (usec): min=1390, max=44813, avg=2734.99, stdev=1008.10 00:39:03.030 clat percentiles (usec): 00:39:03.030 | 1.00th=[ 2114], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2638], 00:39:03.030 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:39:03.030 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2966], 00:39:03.030 | 99.00th=[ 3621], 99.50th=[ 3916], 99.90th=[ 4293], 99.95th=[44827], 00:39:03.030 | 99.99th=[44827] 00:39:03.030 bw ( KiB/s): min=21200, max=23648, per=24.97%, avg=23260.44, stdev=776.28, samples=9 00:39:03.030 iops : min= 2650, max= 2956, avg=2907.56, stdev=97.03, samples=9 00:39:03.030 lat (msec) : 2=0.54%, 4=99.16%, 10=0.25%, 50=0.05% 00:39:03.030 cpu : usr=96.72%, sys=3.06%, ctx=6, majf=0, minf=56 00:39:03.030 IO depths : 1=0.1%, 2=0.2%, 4=69.6%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.030 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.030 issued rwts: total=14562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.030 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:03.030 filename0: (groupid=0, jobs=1): err= 0: pid=1783199: Fri Dec 6 17:54:54 2024 00:39:03.030 read: IOPS=2887, BW=22.6MiB/s (23.7MB/s)(113MiB/5004msec) 00:39:03.030 slat (nsec): min=5494, max=29449, avg=6278.78, stdev=2168.23 00:39:03.030 clat (usec): min=780, max=45193, avg=2752.73, stdev=1023.75 00:39:03.030 lat (usec): min=786, max=45219, avg=2759.01, stdev=1023.87 00:39:03.030 clat percentiles (usec): 00:39:03.030 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2671], 00:39:03.030 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:39:03.030 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2966], 00:39:03.030 | 99.00th=[ 3916], 99.50th=[ 4047], 99.90th=[ 6128], 99.95th=[45351], 00:39:03.030 | 99.99th=[45351] 00:39:03.030 bw ( KiB/s): min=20713, max=23456, per=24.81%, avg=23111.30, stdev=845.22, samples=10 00:39:03.030 iops : min= 2589, max= 2932, avg=2888.90, stdev=105.69, samples=10 00:39:03.030 lat (usec) : 1000=0.01% 00:39:03.030 lat (msec) : 2=0.19%, 4=99.18%, 10=0.56%, 50=0.06% 00:39:03.030 cpu : usr=96.24%, sys=3.56%, ctx=6, majf=0, minf=39 00:39:03.030 IO depths : 1=0.1%, 2=0.1%, 4=73.8%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.030 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.030 issued rwts: total=14450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.030 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:03.030 filename1: (groupid=0, jobs=1): err= 0: pid=1783200: Fri Dec 6 17:54:54 2024 00:39:03.030 read: IOPS=2925, BW=22.9MiB/s (24.0MB/s)(114MiB/5004msec) 00:39:03.030 slat (usec): min=5, max=104, avg= 6.94, stdev= 3.63 00:39:03.030 clat (usec): min=1228, max=6381, avg=2717.54, stdev=263.59 00:39:03.030 lat (usec): min=1237, max=6397, avg=2724.48, stdev=263.27 00:39:03.030 clat percentiles (usec): 00:39:03.030 | 1.00th=[ 1745], 5.00th=[ 2474], 10.00th=[ 2573], 
20.00th=[ 2671], 00:39:03.030 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:39:03.030 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2999], 00:39:03.030 | 99.00th=[ 3916], 99.50th=[ 4047], 99.90th=[ 4293], 99.95th=[ 6194], 00:39:03.030 | 99.99th=[ 6390] 00:39:03.030 bw ( KiB/s): min=23040, max=23920, per=25.13%, avg=23408.00, stdev=247.64, samples=10 00:39:03.030 iops : min= 2880, max= 2990, avg=2926.00, stdev=30.96, samples=10 00:39:03.030 lat (msec) : 2=1.36%, 4=98.00%, 10=0.64% 00:39:03.030 cpu : usr=96.42%, sys=3.38%, ctx=6, majf=0, minf=50 00:39:03.030 IO depths : 1=0.1%, 2=0.2%, 4=69.4%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.030 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.030 issued rwts: total=14638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.030 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:03.030 filename1: (groupid=0, jobs=1): err= 0: pid=1783201: Fri Dec 6 17:54:54 2024 00:39:03.030 read: IOPS=2921, BW=22.8MiB/s (23.9MB/s)(114MiB/5005msec) 00:39:03.030 slat (nsec): min=5474, max=61078, avg=6976.47, stdev=3458.96 00:39:03.030 clat (usec): min=1176, max=6213, avg=2719.72, stdev=274.42 00:39:03.030 lat (usec): min=1185, max=6232, avg=2726.69, stdev=274.15 00:39:03.030 clat percentiles (usec): 00:39:03.030 | 1.00th=[ 1762], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2671], 00:39:03.030 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:39:03.030 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2966], 00:39:03.030 | 99.00th=[ 3916], 99.50th=[ 4080], 99.90th=[ 4686], 99.95th=[ 6128], 00:39:03.030 | 99.99th=[ 6194] 00:39:03.030 bw ( KiB/s): min=23232, max=23824, per=25.11%, avg=23385.60, stdev=179.47, samples=10 00:39:03.030 iops : min= 2904, max= 2978, avg=2923.20, stdev=22.43, samples=10 00:39:03.030 lat (msec) : 2=1.48%, 4=97.68%, 10=0.83% 00:39:03.030 cpu : usr=96.62%, sys=3.16%, ctx=5, majf=0, minf=45 00:39:03.030 IO depths : 1=0.1%, 2=0.2%, 4=71.1%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.030 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.030 issued rwts: total=14624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.030 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:03.030 00:39:03.030 Run status group 0 (all jobs): 00:39:03.030 READ: bw=91.0MiB/s (95.4MB/s), 22.6MiB/s-22.9MiB/s (23.7MB/s-24.0MB/s), io=455MiB (477MB), run=5003-5005msec 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:03.030 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.030 00:39:03.030 real 0m24.519s 00:39:03.030 user 5m14.836s 00:39:03.030 sys 0m4.841s 00:39:03.031 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:03.031 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:03.031 ************************************ 00:39:03.031 END TEST fio_dif_rand_params 00:39:03.031 ************************************ 00:39:03.031 17:54:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:03.031 17:54:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:03.031 17:54:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:03.031 17:54:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:03.031 ************************************ 00:39:03.031 START TEST fio_dif_digest 00:39:03.031 ************************************ 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:03.031 17:54:54 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:03.031 bdev_null0 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:03.031 [2024-12-06 17:54:54.886415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:03.031 { 00:39:03.031 "params": { 00:39:03.031 "name": "Nvme$subsystem", 00:39:03.031 "trtype": "$TEST_TRANSPORT", 00:39:03.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:03.031 "adrfam": "ipv4", 00:39:03.031 "trsvcid": "$NVMF_PORT", 00:39:03.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:39:03.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:03.031 "hdgst": ${hdgst:-false}, 00:39:03.031 "ddgst": ${ddgst:-false} 00:39:03.031 }, 00:39:03.031 "method": "bdev_nvme_attach_controller" 00:39:03.031 } 00:39:03.031 EOF 00:39:03.031 )") 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:03.031 "params": { 00:39:03.031 "name": "Nvme0", 00:39:03.031 "trtype": "tcp", 00:39:03.031 "traddr": "10.0.0.2", 00:39:03.031 "adrfam": "ipv4", 00:39:03.031 "trsvcid": "4420", 00:39:03.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:03.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:03.031 "hdgst": true, 00:39:03.031 "ddgst": true 00:39:03.031 }, 00:39:03.031 "method": "bdev_nvme_attach_controller" 00:39:03.031 }' 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:03.031 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:03.292 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:03.292 ... 
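Relative to the earlier jobs, this digest run changes two things visible in the trace: the null bdev behind cnode0 is created with --dif-type 3 (NULL_DIF=3), and the attach parameters set "hdgst": true and "ddgst": true, so the initiator negotiates NVMe/TCP header and data digests and every PDU is protected by a CRC32C checksum. The target-side sequence can be reproduced standalone with rpc.py; the sketch below repeats the exact RPCs traced above, assumes the TCP transport was already created earlier in the harness, and uses a placeholder rpc.py path.

# Sketch: target-side setup for the digest test, mirroring the traced RPCs.
# Assumes a running SPDK nvmf target whose TCP transport already exists.
RPC=/path/to/spdk/scripts/rpc.py
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420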
00:39:03.292 fio-3.35 00:39:03.292 Starting 3 threads 00:39:15.656 00:39:15.656 filename0: (groupid=0, jobs=1): err= 0: pid=1783466: Fri Dec 6 17:55:05 2024 00:39:15.656 read: IOPS=318, BW=39.8MiB/s (41.7MB/s)(400MiB/10047msec) 00:39:15.656 slat (nsec): min=5883, max=38140, avg=7869.79, stdev=1460.91 00:39:15.656 clat (usec): min=5827, max=51236, avg=9398.43, stdev=1615.43 00:39:15.656 lat (usec): min=5833, max=51242, avg=9406.30, stdev=1615.48 00:39:15.656 clat percentiles (usec): 00:39:15.656 | 1.00th=[ 6718], 5.00th=[ 7308], 10.00th=[ 7635], 20.00th=[ 8160], 00:39:15.656 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:39:15.656 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:39:15.656 | 99.00th=[11994], 99.50th=[12256], 99.90th=[16909], 99.95th=[49021], 00:39:15.656 | 99.99th=[51119] 00:39:15.656 bw ( KiB/s): min=38144, max=43008, per=37.83%, avg=40921.60, stdev=1284.77, samples=20 00:39:15.656 iops : min= 298, max= 336, avg=319.70, stdev=10.04, samples=20 00:39:15.656 lat (msec) : 10=65.61%, 20=34.32%, 50=0.03%, 100=0.03% 00:39:15.656 cpu : usr=94.26%, sys=5.48%, ctx=22, majf=0, minf=86 00:39:15.656 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:15.656 issued rwts: total=3199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:15.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:15.656 filename0: (groupid=0, jobs=1): err= 0: pid=1783467: Fri Dec 6 17:55:05 2024 00:39:15.656 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(394MiB/10046msec) 00:39:15.656 slat (nsec): min=5981, max=46863, avg=8197.48, stdev=1488.57 00:39:15.656 clat (usec): min=6275, max=48817, avg=9535.24, stdev=1709.11 00:39:15.656 lat (usec): min=6281, max=48824, avg=9543.44, stdev=1709.21 00:39:15.656 clat percentiles (usec): 00:39:15.656 | 1.00th=[ 6652], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7963], 00:39:15.656 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:39:15.656 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11207], 95.00th=[11469], 00:39:15.656 | 99.00th=[12387], 99.50th=[12911], 99.90th=[15926], 99.95th=[47449], 00:39:15.656 | 99.99th=[49021] 00:39:15.656 bw ( KiB/s): min=36864, max=43520, per=37.28%, avg=40332.80, stdev=1682.76, samples=20 00:39:15.656 iops : min= 288, max= 340, avg=315.10, stdev=13.15, samples=20 00:39:15.656 lat (msec) : 10=56.20%, 20=43.74%, 50=0.06% 00:39:15.656 cpu : usr=94.28%, sys=5.45%, ctx=78, majf=0, minf=249 00:39:15.656 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:15.656 issued rwts: total=3153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:15.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:15.656 filename0: (groupid=0, jobs=1): err= 0: pid=1783468: Fri Dec 6 17:55:05 2024 00:39:15.656 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(267MiB/10046msec) 00:39:15.656 slat (nsec): min=5909, max=31790, avg=7828.03, stdev=1637.85 00:39:15.656 clat (msec): min=7, max=133, avg=14.06, stdev=12.82 00:39:15.656 lat (msec): min=7, max=133, avg=14.07, stdev=12.82 00:39:15.656 clat percentiles (msec): 00:39:15.656 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:39:15.656 | 30.00th=[ 11], 
40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:39:15.656 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 52], 00:39:15.656 | 99.00th=[ 54], 99.50th=[ 91], 99.90th=[ 93], 99.95th=[ 93], 00:39:15.656 | 99.99th=[ 134] 00:39:15.656 bw ( KiB/s): min=21504, max=33280, per=25.29%, avg=27353.60, stdev=3848.77, samples=20 00:39:15.656 iops : min= 168, max= 260, avg=213.70, stdev=30.07, samples=20 00:39:15.656 lat (msec) : 10=29.87%, 20=61.80%, 50=0.89%, 100=7.39%, 250=0.05% 00:39:15.656 cpu : usr=94.95%, sys=4.81%, ctx=21, majf=0, minf=109 00:39:15.656 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:15.656 issued rwts: total=2139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:15.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:15.656 00:39:15.656 Run status group 0 (all jobs): 00:39:15.656 READ: bw=106MiB/s (111MB/s), 26.6MiB/s-39.8MiB/s (27.9MB/s-41.7MB/s), io=1061MiB (1113MB), run=10046-10047msec 00:39:15.656 17:55:06 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:15.656 17:55:06 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:15.656 17:55:06 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.657 00:39:15.657 real 0m11.290s 00:39:15.657 user 0m43.352s 00:39:15.657 sys 0m1.901s 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:15.657 17:55:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:15.657 ************************************ 00:39:15.657 END TEST fio_dif_digest 00:39:15.657 ************************************ 00:39:15.657 17:55:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:15.657 17:55:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:15.657 rmmod nvme_tcp 00:39:15.657 rmmod nvme_fabrics 00:39:15.657 rmmod nvme_keyring 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1781286 ']' 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1781286 00:39:15.657 17:55:06 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1781286 ']' 00:39:15.657 17:55:06 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1781286 00:39:15.657 17:55:06 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:39:15.657 17:55:06 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:15.657 17:55:06 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1781286 00:39:15.657 17:55:06 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:15.657 17:55:06 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:15.657 17:55:06 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1781286' 00:39:15.657 killing process with pid 1781286 00:39:15.657 17:55:06 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1781286 00:39:15.657 17:55:06 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1781286 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:15.657 17:55:06 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:17.567 Waiting for block devices as requested 00:39:17.827 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:17.827 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:17.827 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:18.088 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:18.088 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:18.088 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:18.347 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:18.347 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:18.347 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:18.607 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:18.607 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:18.607 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:18.867 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:18.867 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:18.867 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:19.127 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:19.127 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:19.389 17:55:11 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:19.389 17:55:11 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:19.389 17:55:11 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:19.389 17:55:11 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:39:19.389 17:55:11 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:19.389 17:55:11 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:39:19.389 17:55:11 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:19.389 17:55:11 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:19.389 17:55:11 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:19.389 17:55:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:19.389 17:55:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.935 17:55:13 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:21.935 00:39:21.935 real 1m18.045s 00:39:21.935 
user 8m2.285s 00:39:21.935 sys 0m21.837s 00:39:21.935 17:55:13 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:21.935 17:55:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:21.935 ************************************ 00:39:21.935 END TEST nvmf_dif 00:39:21.935 ************************************ 00:39:21.935 17:55:13 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:21.935 17:55:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:21.935 17:55:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:21.935 17:55:13 -- common/autotest_common.sh@10 -- # set +x 00:39:21.935 ************************************ 00:39:21.935 START TEST nvmf_abort_qd_sizes 00:39:21.935 ************************************ 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:21.935 * Looking for test storage... 00:39:21.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:21.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.935 --rc genhtml_branch_coverage=1 00:39:21.935 --rc genhtml_function_coverage=1 00:39:21.935 --rc genhtml_legend=1 00:39:21.935 --rc geninfo_all_blocks=1 00:39:21.935 --rc geninfo_unexecuted_blocks=1 00:39:21.935 00:39:21.935 ' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:21.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.935 --rc genhtml_branch_coverage=1 00:39:21.935 --rc genhtml_function_coverage=1 00:39:21.935 --rc genhtml_legend=1 00:39:21.935 --rc geninfo_all_blocks=1 00:39:21.935 --rc geninfo_unexecuted_blocks=1 00:39:21.935 00:39:21.935 ' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:21.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.935 --rc genhtml_branch_coverage=1 00:39:21.935 --rc genhtml_function_coverage=1 00:39:21.935 --rc genhtml_legend=1 00:39:21.935 --rc geninfo_all_blocks=1 00:39:21.935 --rc geninfo_unexecuted_blocks=1 00:39:21.935 00:39:21.935 ' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:21.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.935 --rc genhtml_branch_coverage=1 00:39:21.935 --rc genhtml_function_coverage=1 00:39:21.935 --rc genhtml_legend=1 00:39:21.935 --rc geninfo_all_blocks=1 00:39:21.935 --rc geninfo_unexecuted_blocks=1 00:39:21.935 00:39:21.935 ' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
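Note: the xtrace above walks scripts/common.sh's component-wise version compare — each version string is split into an array of numeric fields and compared field by field, which is how "lt 1.15 2" decides the installed lcov predates 2.0. A minimal standalone sketch of the same idea (the helper name ver_lt is ours, not SPDK's, and it splits on dots only, where the real script also handles '-' and ':'):

    # Return 0 (true) if $1 is strictly older than $2, comparing dot-separated
    # fields numerically; missing fields count as 0, so "1.15" vs "2" decides on 1 < 2.
    ver_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "older"    # prints "older", matching the traced check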
NVMF_IP_PREFIX=192.168.100 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:21.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:21.935 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:21.936 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:21.936 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:21.936 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:21.936 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:21.936 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:21.936 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.936 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:21.936 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:21.936 17:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:39:21.936 17:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:30.085 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:30.085 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:30.085 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:30.086 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:30.086 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:30.086 17:55:20 
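Note: gather_supported_nvmf_pci_devs, traced above, first whitelists NIC device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox ConnectX IDs), then resolves each surviving PCI function to its kernel net device by globbing sysfs — that is where the cvl_0_0/cvl_0_1 names come from. The mapping step reduces to a sketch like this (the bus address 0000:4b:00.0 is taken from the log; yours will differ):

    # List the net devices exposed by one PCI function.
    pci=0000:4b:00.0
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $path ]] || continue          # glob unmatched: no netdev bound to this function
        echo "Found net device under $pci: ${path##*/}"
    done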
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:30.086 17:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:30.086 17:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:30.086 17:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:30.086 17:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:30.086 17:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:30.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:30.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:39:30.086 00:39:30.086 --- 10.0.0.2 ping statistics --- 00:39:30.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:30.086 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:39:30.086 17:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:30.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:30.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:39:30.086 00:39:30.086 --- 10.0.0.1 ping statistics --- 00:39:30.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:30.086 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:39:30.086 17:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:30.086 17:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:39:30.086 17:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:30.086 17:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:32.642 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:32.643 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:32.906 17:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:32.906 17:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:32.906 17:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:32.906 17:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:32.906 17:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:32.906 17:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1787866 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1787866 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1787866 ']' 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
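Note: the nvmf_tcp_init sequence traced above turns one host with two E810 ports into a self-contained initiator/target pair: the target port is moved into a fresh network namespace so the two sides get independent TCP/IP stacks, and the round-trip pings confirm the 10.0.0.0/24 link before any NVMe traffic flows. Condensed into a sketch (interface and namespace names as in the log):

    set -e
    ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # firewall exception for NVMe/TCP, as traced
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns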
00:39:33.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:33.167 17:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:33.167 [2024-12-06 17:55:25.042675] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:39:33.167 [2024-12-06 17:55:25.042742] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:33.167 [2024-12-06 17:55:25.141192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:33.167 [2024-12-06 17:55:25.195401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:33.167 [2024-12-06 17:55:25.195457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:33.167 [2024-12-06 17:55:25.195466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:33.167 [2024-12-06 17:55:25.195473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:33.167 [2024-12-06 17:55:25.195479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:33.167 [2024-12-06 17:55:25.197540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:33.167 [2024-12-06 17:55:25.197703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:33.167 [2024-12-06 17:55:25.197791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:33.167 [2024-12-06 17:55:25.197924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:34.109 
17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:34.109 17:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:34.109 ************************************ 00:39:34.109 START TEST spdk_target_abort 00:39:34.109 ************************************ 00:39:34.109 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:39:34.109 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:34.109 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:39:34.109 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.109 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:34.370 spdk_targetn1 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:34.370 [2024-12-06 17:55:26.257841] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:34.370 [2024-12-06 17:55:26.306144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:34.370 17:55:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:34.677 [2024-12-06 17:55:26.447492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:32 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:39:34.677 [2024-12-06 17:55:26.447527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0006 p:1 m:0 dnr:0 00:39:34.677 [2024-12-06 17:55:26.455068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:288 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:39:34.677 [2024-12-06 17:55:26.455088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0025 p:1 m:0 dnr:0 00:39:34.677 [2024-12-06 17:55:26.463143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:512 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:39:34.677 [2024-12-06 17:55:26.463161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0043 p:1 m:0 dnr:0 00:39:34.677 [2024-12-06 17:55:26.478515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1032 len:8 PRP1 0x200004abe000 PRP2 0x0 00:39:34.677 [2024-12-06 17:55:26.478535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0083 p:1 m:0 dnr:0 00:39:34.677 [2024-12-06 17:55:26.502623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1904 len:8 PRP1 0x200004abe000 PRP2 0x0 00:39:34.677 [2024-12-06 17:55:26.502648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00ef p:1 m:0 dnr:0 00:39:34.677 [2024-12-06 17:55:26.510121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2112 len:8 PRP1 0x200004abe000 PRP2 0x0 00:39:34.677 [2024-12-06 17:55:26.510139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:39:34.677 [2024-12-06 17:55:26.534206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2944 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:39:34.677 [2024-12-06 17:55:26.534226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:39:34.677 [2024-12-06 17:55:26.558139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3728 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:39:34.677 [2024-12-06 17:55:26.558159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00d5 p:0 m:0 dnr:0 00:39:37.974 Initializing NVMe Controllers 00:39:37.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:37.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:37.974 Initialization complete. Launching workers. 
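Note: the abort exercise itself is a loop over queue depths (qds=(4 24 64) in the trace): for each depth the script assembles an NVMe-oF transport ID and runs SPDK's abort example, which drives 4 KiB mixed read/write I/O while issuing Abort admin commands against the in-flight requests — the ABORTED - BY REQUEST notices above are those aborts landing. Schematically (binary path and flags taken from the log; per the example's usage, -M 50 is the read percentage of the rw mix):

    ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$target"   # -q queue depth, -o I/O size in bytes
    done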
00:39:37.974 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11517, failed: 8 00:39:37.974 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1334, failed to submit 10191 00:39:37.974 success 774, unsuccessful 560, failed 0 00:39:37.974 17:55:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:37.974 17:55:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:37.974 [2024-12-06 17:55:29.716884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200004e50000 PRP2 0x0 00:39:37.974 [2024-12-06 17:55:29.716921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:39:37.974 [2024-12-06 17:55:29.723533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:432 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:39:37.974 [2024-12-06 17:55:29.723556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:39:37.974 [2024-12-06 17:55:29.802772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:2296 len:8 PRP1 0x200004e42000 PRP2 0x0 00:39:37.974 [2024-12-06 17:55:29.802796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:39:37.974 [2024-12-06 17:55:29.810712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:2488 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:39:37.974 [2024-12-06 17:55:29.810733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:39:37.974 [2024-12-06 17:55:29.866795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:3880 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:39:37.974 [2024-12-06 17:55:29.866820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00e6 p:0 m:0 dnr:0 00:39:38.917 [2024-12-06 17:55:30.641524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:21496 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:39:38.917 [2024-12-06 17:55:30.641559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0088 p:1 m:0 dnr:0 00:39:40.303 [2024-12-06 17:55:32.252456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:58272 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:39:40.303 [2024-12-06 17:55:32.252486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0075 p:1 m:0 dnr:0 00:39:40.564 [2024-12-06 17:55:32.463379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:63144 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:39:40.564 [2024-12-06 17:55:32.463401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00d8 p:1 m:0 dnr:0 00:39:40.825 Initializing NVMe Controllers 00:39:40.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:testnqn 00:39:40.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:40.825 Initialization complete. Launching workers. 00:39:40.825 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8603, failed: 8 00:39:40.825 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1177, failed to submit 7434 00:39:40.825 success 367, unsuccessful 810, failed 0 00:39:40.825 17:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:40.825 17:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:41.085 [2024-12-06 17:55:33.128745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:150 nsid:1 lba:3616 len:8 PRP1 0x200004aec000 PRP2 0x0 00:39:41.086 [2024-12-06 17:55:33.128769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:150 cdw0:0 sqhd:00d4 p:1 m:0 dnr:0 00:39:44.387 Initializing NVMe Controllers 00:39:44.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:44.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:44.387 Initialization complete. Launching workers. 00:39:44.387 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43357, failed: 1 00:39:44.387 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2679, failed to submit 40679 00:39:44.387 success 593, unsuccessful 2086, failed 0 00:39:44.387 17:55:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:44.387 17:55:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.387 17:55:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.387 17:55:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.387 17:55:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:44.387 17:55:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.387 17:55:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.302 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.302 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1787866 00:39:46.302 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1787866 ']' 00:39:46.302 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1787866 00:39:46.302 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:39:46.302 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:46.302 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1787866 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # 
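Note: the per-run summaries are easiest to read as a pair of identities, and the counters above satisfy both. Completed plus failed workload I/O equals aborts submitted plus aborts that failed to submit: for qd=4, 11517 + 8 = 1334 + 10191 = 11525; for qd=24, 8603 + 8 = 1177 + 7434 = 8611; for qd=64, 43357 + 1 = 2679 + 40679 = 43358. And every submitted Abort is counted exactly once as success or unsuccessful: 774 + 560 = 1334, 367 + 810 = 1177, 593 + 2086 = 2679.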
process_name=reactor_0 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1787866' 00:39:46.302 killing process with pid 1787866 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1787866 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1787866 00:39:46.302 00:39:46.302 real 0m12.213s 00:39:46.302 user 0m49.741s 00:39:46.302 sys 0m1.990s 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.302 ************************************ 00:39:46.302 END TEST spdk_target_abort 00:39:46.302 ************************************ 00:39:46.302 17:55:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:46.302 17:55:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:46.302 17:55:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:46.302 17:55:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:46.302 ************************************ 00:39:46.302 START TEST kernel_target_abort 00:39:46.302 ************************************ 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:46.302 17:55:38 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:46.302 17:55:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:49.607 Waiting for block devices as requested 00:39:49.607 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:49.607 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:49.868 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:49.868 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:49.868 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:50.130 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:50.130 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:50.130 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:50.391 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:50.391 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:50.652 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:50.652 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:50.652 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:50.912 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:50.912 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:50.912 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:50.912 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:51.484 No valid GPT data, bailing 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # 
nvme=/dev/nvme0n1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:39:51.484 00:39:51.484 Discovery Log Number of Records 2, Generation counter 2 00:39:51.484 =====Discovery Log Entry 0====== 00:39:51.484 trtype: tcp 00:39:51.484 adrfam: ipv4 00:39:51.484 subtype: current discovery subsystem 00:39:51.484 treq: not specified, sq flow control disable supported 00:39:51.484 portid: 1 00:39:51.484 trsvcid: 4420 00:39:51.484 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:51.484 traddr: 10.0.0.1 00:39:51.484 eflags: none 00:39:51.484 sectype: none 00:39:51.484 =====Discovery Log Entry 1====== 00:39:51.484 trtype: tcp 00:39:51.484 adrfam: ipv4 00:39:51.484 subtype: nvme subsystem 00:39:51.484 treq: not specified, sq flow control disable supported 00:39:51.484 portid: 1 00:39:51.484 trsvcid: 4420 00:39:51.484 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:51.484 traddr: 10.0.0.1 00:39:51.484 eflags: none 00:39:51.484 sectype: none 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:51.484 17:55:43 
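Note: configure_kernel_target, traced above, stands up the Linux in-kernel NVMe-oF target purely through configfs: create a subsystem, back namespace 1 with the local /dev/nvme0n1, create a port, then symlink the subsystem into the port — the symlink is what starts the listener, and the nvme discover output above confirms both the discovery subsystem and testnqn are exported. The xtrace shows the echo commands but not their redirection targets, so the attribute files below are the standard nvmet knobs they presumably land in (the "SPDK-..." model-string write from the trace is left out, since its target file isn't shown):

    set -e
    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1 > "$subsys/attr_allow_any_host"            # skip host NQN allow-listing
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"               # transport module is pulled in on demand
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"               # go live
    nvme discover -t tcp -a 10.0.0.1 -s 4420          # expect 2 records, as above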
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:51.484 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:51.485 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:51.485 17:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:54.839 Initializing NVMe Controllers 00:39:54.839 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:54.839 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:54.839 Initialization complete. Launching workers. 00:39:54.839 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68020, failed: 0 00:39:54.839 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 68020, failed to submit 0 00:39:54.839 success 0, unsuccessful 68020, failed 0 00:39:54.839 17:55:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:54.840 17:55:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:58.140 Initializing NVMe Controllers 00:39:58.140 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:58.140 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:58.140 Initialization complete. Launching workers. 
00:39:58.140 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 116608, failed: 0 00:39:58.140 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29362, failed to submit 87246 00:39:58.140 success 0, unsuccessful 29362, failed 0 00:39:58.140 17:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:58.140 17:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:01.440 Initializing NVMe Controllers 00:40:01.440 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:01.440 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:01.440 Initialization complete. Launching workers. 00:40:01.440 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146201, failed: 0 00:40:01.440 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36578, failed to submit 109623 00:40:01.440 success 0, unsuccessful 36578, failed 0 00:40:01.441 17:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:01.441 17:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:01.441 17:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:40:01.441 17:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:01.441 17:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:01.441 17:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:01.441 17:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:01.441 17:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:40:01.441 17:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:40:01.441 17:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:04.741 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:04.741 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:40:04.741 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:06.127 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:06.387 00:40:06.387 real 0m20.202s 00:40:06.387 user 0m10.101s 00:40:06.387 sys 0m5.777s 00:40:06.387 17:55:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:06.387 17:55:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:06.387 ************************************ 00:40:06.387 END TEST kernel_target_abort 00:40:06.387 ************************************ 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:06.647 rmmod nvme_tcp 00:40:06.647 rmmod nvme_fabrics 00:40:06.647 rmmod nvme_keyring 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1787866 ']' 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1787866 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1787866 ']' 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1787866 00:40:06.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1787866) - No such process 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1787866 is not found' 00:40:06.647 Process with pid 1787866 is not found 00:40:06.647 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:40:06.648 17:55:58 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:09.947 Waiting for block devices as requested 00:40:09.947 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:10.207 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:10.207 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:10.207 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:10.467 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:10.467 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:10.467 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:10.727 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:10.727 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:10.988 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:10.988 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:10.988 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:11.249 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:11.249 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:11.249 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:11.249 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:11.511 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:11.773 17:56:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:13.761 17:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:13.761 00:40:13.761 real 0m52.269s 00:40:13.761 user 1m5.168s 00:40:13.761 sys 0m18.915s 00:40:13.761 17:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:13.761 17:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:13.761 ************************************ 00:40:13.761 END TEST nvmf_abort_qd_sizes 00:40:13.761 ************************************ 00:40:14.036 17:56:05 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:14.036 17:56:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:14.036 17:56:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:14.036 17:56:05 -- common/autotest_common.sh@10 -- # set +x 00:40:14.036 ************************************ 00:40:14.036 START TEST keyring_file 00:40:14.036 ************************************ 00:40:14.036 17:56:05 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:14.036 * Looking for test storage... 
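Before the keyring_file storage probe reports back below, a note on the kernel-target teardown traced a few entries back (nvmf/common.sh@712-723): it mirrors the setup in reverse, which configfs requires, since the port-to-subsystem symlink must go before any rmdir and directories are removed inside-out before the modules unload. A condensed sketch; the target of the `echo 0` is not shown in the trace and is assumed to be the namespace enable flag:

    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet
    echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # assumed target of 'echo 0'
    rm -f "$nvmet/ports/1/subsystems/$nqn"                  # unlink before rmdir
    rmdir "$nvmet/subsystems/$nqn/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$nvmet/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet

nvmftestfini then unloads nvme-tcp/nvme-fabrics on the initiator side, and iptr restores the firewall by replaying everything except SPDK's own rules (iptables-save | grep -v SPDK_NVMF | iptables-restore), exactly as traced above.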
00:40:14.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:14.036 17:56:05 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:14.036 17:56:05 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:40:14.036 17:56:05 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:14.036 17:56:06 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@345 -- # : 1 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@353 -- # local d=1 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@355 -- # echo 1 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@353 -- # local d=2 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@355 -- # echo 2 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:40:14.036 17:56:06 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:14.037 17:56:06 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:14.037 17:56:06 keyring_file -- scripts/common.sh@368 -- # return 0 00:40:14.037 17:56:06 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:14.037 17:56:06 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.037 --rc genhtml_branch_coverage=1 00:40:14.037 --rc genhtml_function_coverage=1 00:40:14.037 --rc genhtml_legend=1 00:40:14.037 --rc geninfo_all_blocks=1 00:40:14.037 --rc geninfo_unexecuted_blocks=1 00:40:14.037 00:40:14.037 ' 00:40:14.037 17:56:06 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.037 --rc genhtml_branch_coverage=1 00:40:14.037 --rc genhtml_function_coverage=1 00:40:14.037 --rc genhtml_legend=1 00:40:14.037 --rc geninfo_all_blocks=1 
00:40:14.037 --rc geninfo_unexecuted_blocks=1 00:40:14.037 00:40:14.037 ' 00:40:14.037 17:56:06 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.037 --rc genhtml_branch_coverage=1 00:40:14.037 --rc genhtml_function_coverage=1 00:40:14.037 --rc genhtml_legend=1 00:40:14.037 --rc geninfo_all_blocks=1 00:40:14.037 --rc geninfo_unexecuted_blocks=1 00:40:14.037 00:40:14.037 ' 00:40:14.037 17:56:06 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.037 --rc genhtml_branch_coverage=1 00:40:14.037 --rc genhtml_function_coverage=1 00:40:14.037 --rc genhtml_legend=1 00:40:14.037 --rc geninfo_all_blocks=1 00:40:14.037 --rc geninfo_unexecuted_blocks=1 00:40:14.037 00:40:14.037 ' 00:40:14.037 17:56:06 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:14.037 17:56:06 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:14.037 17:56:06 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:14.298 17:56:06 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:40:14.298 17:56:06 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:14.298 17:56:06 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:14.298 17:56:06 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:14.298 17:56:06 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.298 17:56:06 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.298 17:56:06 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.298 17:56:06 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:14.298 17:56:06 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:14.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:14.298 17:56:06 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:14.298 17:56:06 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:14.298 17:56:06 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:14.298 17:56:06 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:14.298 17:56:06 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:14.298 17:56:06 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
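The prep_key helper whose locals were just declared continues below: mktemp creates the key path, format_interchange_psk (nvmf/common.sh@743) renders the hex key, and chmod 0600 locks the file down, since the keyring later rejects anything more permissive. The body behind "python -" is not captured by xtrace; the one-liner below assumes it emits the TP 8018 interchange form, i.e. the key bytes plus their little-endian CRC32, base64-encoded between "NVMeTLSkey-1:<digest>:" and a trailing colon:

    # Assumed equivalent of the traced 'python -' step for key0 with digest 0.
    key=00112233445566778899aabbccddeeff   # key0 material from file.sh@15
    python3 -c "import base64,struct,zlib; raw=bytes.fromhex('$key'); print('NVMeTLSkey-1:00:'+base64.b64encode(raw+struct.pack('<I',zlib.crc32(raw))).decode()+':')"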
00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dRYCW7tu6n 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:14.298 17:56:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dRYCW7tu6n 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dRYCW7tu6n 00:40:14.298 17:56:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.dRYCW7tu6n 00:40:14.298 17:56:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:14.298 17:56:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:14.299 17:56:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:14.299 17:56:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:14.299 17:56:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:14.299 17:56:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7aKXHgZD3P 00:40:14.299 17:56:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:14.299 17:56:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:14.299 17:56:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:14.299 17:56:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:14.299 17:56:06 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:14.299 17:56:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:14.299 17:56:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:14.299 17:56:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7aKXHgZD3P 00:40:14.299 17:56:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7aKXHgZD3P 00:40:14.299 17:56:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.7aKXHgZD3P 00:40:14.299 17:56:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=1791169 00:40:14.299 17:56:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1791169 00:40:14.299 17:56:06 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:14.299 17:56:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1791169 ']' 00:40:14.299 17:56:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:14.299 17:56:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:14.299 17:56:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:14.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:14.299 17:56:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:14.299 17:56:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:14.299 [2024-12-06 17:56:06.295343] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:40:14.299 [2024-12-06 17:56:06.295420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1791169 ] 00:40:14.558 [2024-12-06 17:56:06.386236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.558 [2024-12-06 17:56:06.439885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:15.129 17:56:07 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:15.129 [2024-12-06 17:56:07.098799] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:15.129 null0 00:40:15.129 [2024-12-06 17:56:07.130838] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:15.129 [2024-12-06 17:56:07.131377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.129 17:56:07 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:15.129 [2024-12-06 17:56:07.162905] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:15.129 request: 00:40:15.129 { 00:40:15.129 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:15.129 "secure_channel": false, 00:40:15.129 "listen_address": { 00:40:15.129 "trtype": "tcp", 00:40:15.129 "traddr": "127.0.0.1", 00:40:15.129 "trsvcid": "4420" 00:40:15.129 }, 00:40:15.129 "method": "nvmf_subsystem_add_listener", 00:40:15.129 "req_id": 1 00:40:15.129 } 00:40:15.129 Got JSON-RPC error response 00:40:15.129 response: 00:40:15.129 { 00:40:15.129 
"code": -32602, 00:40:15.129 "message": "Invalid parameters" 00:40:15.129 } 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:15.129 17:56:07 keyring_file -- keyring/file.sh@47 -- # bperfpid=1791186 00:40:15.129 17:56:07 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1791186 /var/tmp/bperf.sock 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1791186 ']' 00:40:15.129 17:56:07 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:15.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:15.129 17:56:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:15.390 [2024-12-06 17:56:07.224498] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:40:15.390 [2024-12-06 17:56:07.224559] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1791186 ] 00:40:15.390 [2024-12-06 17:56:07.315026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.390 [2024-12-06 17:56:07.366669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:15.956 17:56:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:15.956 17:56:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:15.956 17:56:08 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dRYCW7tu6n 00:40:15.956 17:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dRYCW7tu6n 00:40:16.214 17:56:08 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7aKXHgZD3P 00:40:16.214 17:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7aKXHgZD3P 00:40:16.473 17:56:08 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:40:16.473 17:56:08 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:16.473 17:56:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:16.473 17:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:16.473 17:56:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:40:16.473 17:56:08 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.dRYCW7tu6n == \/\t\m\p\/\t\m\p\.\d\R\Y\C\W\7\t\u\6\n ]] 00:40:16.473 17:56:08 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:40:16.473 17:56:08 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:40:16.473 17:56:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:16.473 17:56:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:16.473 17:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:16.731 17:56:08 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.7aKXHgZD3P == \/\t\m\p\/\t\m\p\.\7\a\K\X\H\g\Z\D\3\P ]] 00:40:16.731 17:56:08 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:40:16.731 17:56:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:16.731 17:56:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:16.731 17:56:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:16.731 17:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:16.731 17:56:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:16.989 17:56:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:16.989 17:56:08 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:40:16.989 17:56:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:16.989 17:56:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:16.989 17:56:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:16.989 17:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:16.989 17:56:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:17.250 17:56:09 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:40:17.250 17:56:09 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:17.250 17:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:17.250 [2024-12-06 17:56:09.245738] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:17.510 nvme0n1 00:40:17.510 17:56:09 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:40:17.510 17:56:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:17.510 17:56:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:17.510 17:56:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:17.510 17:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:17.510 17:56:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:17.510 17:56:09 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:40:17.510 17:56:09 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:40:17.510 17:56:09 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:40:17.510 17:56:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:17.510 17:56:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:17.510 17:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:17.510 17:56:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:17.770 17:56:09 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:40:17.770 17:56:09 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:18.030 Running I/O for 1 seconds... 00:40:18.970 19718.00 IOPS, 77.02 MiB/s 00:40:18.970 Latency(us) 00:40:18.970 [2024-12-06T16:56:11.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:18.970 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:18.970 nvme0n1 : 1.00 19769.81 77.23 0.00 0.00 6462.87 3822.93 18022.40 00:40:18.970 [2024-12-06T16:56:11.036Z] =================================================================================================================== 00:40:18.970 [2024-12-06T16:56:11.036Z] Total : 19769.81 77.23 0.00 0.00 6462.87 3822.93 18022.40 00:40:18.970 { 00:40:18.970 "results": [ 00:40:18.970 { 00:40:18.970 "job": "nvme0n1", 00:40:18.970 "core_mask": "0x2", 00:40:18.970 "workload": "randrw", 00:40:18.970 "percentage": 50, 00:40:18.970 "status": "finished", 00:40:18.970 "queue_depth": 128, 00:40:18.970 "io_size": 4096, 00:40:18.970 "runtime": 1.003854, 00:40:18.970 "iops": 19769.807163193054, 00:40:18.970 "mibps": 77.22580923122287, 00:40:18.970 "io_failed": 0, 00:40:18.970 "io_timeout": 0, 00:40:18.970 "avg_latency_us": 6462.865117404011, 00:40:18.970 "min_latency_us": 3822.9333333333334, 00:40:18.970 "max_latency_us": 18022.4 00:40:18.970 } 00:40:18.970 ], 00:40:18.970 "core_count": 1 00:40:18.970 } 00:40:18.970 17:56:10 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:18.970 17:56:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:19.232 17:56:11 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:40:19.232 17:56:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:19.232 17:56:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:19.232 17:56:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:19.232 17:56:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:19.232 17:56:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:19.232 17:56:11 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:19.232 17:56:11 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:40:19.232 17:56:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:19.232 17:56:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:19.232 17:56:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:19.232 17:56:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:19.232 17:56:11 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:19.493 17:56:11 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:19.493 17:56:11 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:19.493 17:56:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:19.493 17:56:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:19.493 17:56:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:19.493 17:56:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:19.493 17:56:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:19.493 17:56:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:19.493 17:56:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:19.493 17:56:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:19.755 [2024-12-06 17:56:11.563677] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:19.755 [2024-12-06 17:56:11.563753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda4870 (107): Transport endpoint is not connected 00:40:19.755 [2024-12-06 17:56:11.564748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda4870 (9): Bad file descriptor 00:40:19.755 [2024-12-06 17:56:11.565750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:19.755 [2024-12-06 17:56:11.565757] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:19.755 [2024-12-06 17:56:11.565763] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:19.755 [2024-12-06 17:56:11.565770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
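The attach that just failed was launched under NOT (autotest_common.sh@652-655 in the trace), so the test passes precisely because connecting with the mismatched key1 errors out; the JSON-RPC exchange for that failure is dumped next. A reduced sketch of the idiom, with NOT simplified to plain exit-status inversion (the real helper also inspects the error code):

    NOT() { ! "$@"; }   # simplified stand-in for the traced helper
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NOT "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1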
00:40:19.755 request: 00:40:19.755 { 00:40:19.755 "name": "nvme0", 00:40:19.755 "trtype": "tcp", 00:40:19.755 "traddr": "127.0.0.1", 00:40:19.755 "adrfam": "ipv4", 00:40:19.755 "trsvcid": "4420", 00:40:19.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:19.755 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:19.755 "prchk_reftag": false, 00:40:19.755 "prchk_guard": false, 00:40:19.755 "hdgst": false, 00:40:19.755 "ddgst": false, 00:40:19.755 "psk": "key1", 00:40:19.755 "allow_unrecognized_csi": false, 00:40:19.755 "method": "bdev_nvme_attach_controller", 00:40:19.755 "req_id": 1 00:40:19.755 } 00:40:19.755 Got JSON-RPC error response 00:40:19.755 response: 00:40:19.755 { 00:40:19.755 "code": -5, 00:40:19.755 "message": "Input/output error" 00:40:19.755 } 00:40:19.755 17:56:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:19.755 17:56:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:19.755 17:56:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:19.755 17:56:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:19.755 17:56:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:19.755 17:56:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:19.755 17:56:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:19.755 17:56:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:19.755 17:56:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:19.755 17:56:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:19.755 17:56:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:19.755 17:56:11 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:19.755 17:56:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:19.755 17:56:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:19.755 17:56:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:19.755 17:56:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:19.755 17:56:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:20.015 17:56:11 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:20.015 17:56:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:20.015 17:56:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:20.275 17:56:12 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:20.275 17:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:20.275 17:56:12 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:20.275 17:56:12 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:20.275 17:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:20.535 17:56:12 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:40:20.535 17:56:12 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.dRYCW7tu6n 00:40:20.535 17:56:12 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.dRYCW7tu6n 00:40:20.535 17:56:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:20.535 17:56:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.dRYCW7tu6n 00:40:20.535 17:56:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:20.535 17:56:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:20.535 17:56:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:20.535 17:56:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:20.535 17:56:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dRYCW7tu6n 00:40:20.535 17:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dRYCW7tu6n 00:40:20.793 [2024-12-06 17:56:12.646534] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dRYCW7tu6n': 0100660 00:40:20.793 [2024-12-06 17:56:12.646555] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:20.793 request: 00:40:20.793 { 00:40:20.793 "name": "key0", 00:40:20.793 "path": "/tmp/tmp.dRYCW7tu6n", 00:40:20.793 "method": "keyring_file_add_key", 00:40:20.793 "req_id": 1 00:40:20.793 } 00:40:20.793 Got JSON-RPC error response 00:40:20.793 response: 00:40:20.793 { 00:40:20.793 "code": -1, 00:40:20.793 "message": "Operation not permitted" 00:40:20.793 } 00:40:20.793 17:56:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:20.793 17:56:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:20.793 17:56:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:20.793 17:56:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:20.793 17:56:12 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.dRYCW7tu6n 00:40:20.793 17:56:12 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dRYCW7tu6n 00:40:20.793 17:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dRYCW7tu6n 00:40:20.793 17:56:12 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.dRYCW7tu6n 00:40:20.793 17:56:12 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:20.793 17:56:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:20.793 17:56:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:20.793 17:56:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:20.793 17:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:20.793 17:56:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:21.052 17:56:13 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:21.052 17:56:13 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:21.052 17:56:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:21.052 17:56:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:21.052 17:56:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:21.052 17:56:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:21.052 17:56:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:21.052 17:56:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:21.052 17:56:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:21.052 17:56:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:21.311 [2024-12-06 17:56:13.151818] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.dRYCW7tu6n': No such file or directory 00:40:21.311 [2024-12-06 17:56:13.151833] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:21.311 [2024-12-06 17:56:13.151846] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:21.311 [2024-12-06 17:56:13.151852] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:21.311 [2024-12-06 17:56:13.151863] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:21.311 [2024-12-06 17:56:13.151868] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:21.311 request: 00:40:21.311 { 00:40:21.311 "name": "nvme0", 00:40:21.311 "trtype": "tcp", 00:40:21.311 "traddr": "127.0.0.1", 00:40:21.311 "adrfam": "ipv4", 00:40:21.311 "trsvcid": "4420", 00:40:21.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:21.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:21.311 "prchk_reftag": false, 00:40:21.311 "prchk_guard": false, 00:40:21.311 "hdgst": false, 00:40:21.311 "ddgst": false, 00:40:21.311 "psk": "key0", 00:40:21.311 "allow_unrecognized_csi": false, 00:40:21.311 "method": "bdev_nvme_attach_controller", 00:40:21.311 "req_id": 1 00:40:21.311 } 00:40:21.311 Got JSON-RPC error response 00:40:21.311 response: 00:40:21.311 { 00:40:21.311 "code": -19, 00:40:21.311 "message": "No such device" 00:40:21.311 } 00:40:21.311 17:56:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:21.311 17:56:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:21.311 17:56:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:21.311 17:56:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:21.311 17:56:13 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:21.311 17:56:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:21.311 17:56:13 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:21.311 17:56:13 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:40:21.311 17:56:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:21.311 17:56:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:21.311 17:56:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:21.311 17:56:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:21.311 17:56:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JuTeFKnRP6 00:40:21.311 17:56:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:21.311 17:56:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:21.311 17:56:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:21.311 17:56:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:21.311 17:56:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:21.311 17:56:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:21.311 17:56:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:21.571 17:56:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JuTeFKnRP6 00:40:21.571 17:56:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JuTeFKnRP6 00:40:21.571 17:56:13 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.JuTeFKnRP6 00:40:21.571 17:56:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JuTeFKnRP6 00:40:21.571 17:56:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JuTeFKnRP6 00:40:21.571 17:56:13 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:21.571 17:56:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:21.830 nvme0n1 00:40:21.830 17:56:13 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:21.830 17:56:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:21.830 17:56:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:21.830 17:56:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.830 17:56:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:21.830 17:56:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.090 17:56:13 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:40:22.090 17:56:13 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:40:22.090 17:56:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:22.351 17:56:14 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:40:22.351 17:56:14 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:40:22.351 17:56:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.351 17:56:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:22.351 17:56:14 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.351 17:56:14 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:40:22.351 17:56:14 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:40:22.351 17:56:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:22.351 17:56:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:22.351 17:56:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.351 17:56:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:22.351 17:56:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.611 17:56:14 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:40:22.611 17:56:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:22.611 17:56:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:22.871 17:56:14 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:40:22.871 17:56:14 keyring_file -- keyring/file.sh@105 -- # jq length 00:40:22.871 17:56:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.871 17:56:14 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:40:22.871 17:56:14 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JuTeFKnRP6 00:40:22.871 17:56:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JuTeFKnRP6 00:40:23.129 17:56:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7aKXHgZD3P 00:40:23.129 17:56:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7aKXHgZD3P 00:40:23.387 17:56:15 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:23.387 17:56:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:23.387 nvme0n1 00:40:23.646 17:56:15 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:23.646 17:56:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:23.905 17:56:15 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:23.905 "subsystems": [ 00:40:23.905 { 00:40:23.905 "subsystem": "keyring", 00:40:23.905 "config": [ 00:40:23.905 { 00:40:23.905 "method": "keyring_file_add_key", 00:40:23.905 "params": { 00:40:23.905 "name": "key0", 00:40:23.905 "path": "/tmp/tmp.JuTeFKnRP6" 00:40:23.905 } 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "method": "keyring_file_add_key", 00:40:23.905 "params": { 00:40:23.905 "name": "key1", 00:40:23.905 "path": "/tmp/tmp.7aKXHgZD3P" 00:40:23.905 } 00:40:23.905 } 00:40:23.905 ] 00:40:23.905 
}, 00:40:23.905 { 00:40:23.905 "subsystem": "iobuf", 00:40:23.905 "config": [ 00:40:23.905 { 00:40:23.905 "method": "iobuf_set_options", 00:40:23.905 "params": { 00:40:23.905 "small_pool_count": 8192, 00:40:23.905 "large_pool_count": 1024, 00:40:23.905 "small_bufsize": 8192, 00:40:23.905 "large_bufsize": 135168, 00:40:23.905 "enable_numa": false 00:40:23.905 } 00:40:23.905 } 00:40:23.905 ] 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "subsystem": "sock", 00:40:23.905 "config": [ 00:40:23.905 { 00:40:23.905 "method": "sock_set_default_impl", 00:40:23.905 "params": { 00:40:23.905 "impl_name": "posix" 00:40:23.905 } 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "method": "sock_impl_set_options", 00:40:23.905 "params": { 00:40:23.905 "impl_name": "ssl", 00:40:23.905 "recv_buf_size": 4096, 00:40:23.905 "send_buf_size": 4096, 00:40:23.905 "enable_recv_pipe": true, 00:40:23.905 "enable_quickack": false, 00:40:23.905 "enable_placement_id": 0, 00:40:23.905 "enable_zerocopy_send_server": true, 00:40:23.905 "enable_zerocopy_send_client": false, 00:40:23.905 "zerocopy_threshold": 0, 00:40:23.905 "tls_version": 0, 00:40:23.905 "enable_ktls": false 00:40:23.905 } 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "method": "sock_impl_set_options", 00:40:23.905 "params": { 00:40:23.905 "impl_name": "posix", 00:40:23.905 "recv_buf_size": 2097152, 00:40:23.905 "send_buf_size": 2097152, 00:40:23.905 "enable_recv_pipe": true, 00:40:23.905 "enable_quickack": false, 00:40:23.905 "enable_placement_id": 0, 00:40:23.905 "enable_zerocopy_send_server": true, 00:40:23.905 "enable_zerocopy_send_client": false, 00:40:23.905 "zerocopy_threshold": 0, 00:40:23.905 "tls_version": 0, 00:40:23.905 "enable_ktls": false 00:40:23.905 } 00:40:23.905 } 00:40:23.905 ] 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "subsystem": "vmd", 00:40:23.905 "config": [] 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "subsystem": "accel", 00:40:23.905 "config": [ 00:40:23.905 { 00:40:23.905 "method": "accel_set_options", 00:40:23.905 "params": { 00:40:23.905 "small_cache_size": 128, 00:40:23.905 "large_cache_size": 16, 00:40:23.905 "task_count": 2048, 00:40:23.905 "sequence_count": 2048, 00:40:23.905 "buf_count": 2048 00:40:23.905 } 00:40:23.905 } 00:40:23.905 ] 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "subsystem": "bdev", 00:40:23.905 "config": [ 00:40:23.905 { 00:40:23.905 "method": "bdev_set_options", 00:40:23.905 "params": { 00:40:23.905 "bdev_io_pool_size": 65535, 00:40:23.905 "bdev_io_cache_size": 256, 00:40:23.905 "bdev_auto_examine": true, 00:40:23.905 "iobuf_small_cache_size": 128, 00:40:23.905 "iobuf_large_cache_size": 16 00:40:23.905 } 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "method": "bdev_raid_set_options", 00:40:23.905 "params": { 00:40:23.905 "process_window_size_kb": 1024, 00:40:23.905 "process_max_bandwidth_mb_sec": 0 00:40:23.905 } 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "method": "bdev_iscsi_set_options", 00:40:23.905 "params": { 00:40:23.905 "timeout_sec": 30 00:40:23.905 } 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "method": "bdev_nvme_set_options", 00:40:23.905 "params": { 00:40:23.905 "action_on_timeout": "none", 00:40:23.905 "timeout_us": 0, 00:40:23.905 "timeout_admin_us": 0, 00:40:23.905 "keep_alive_timeout_ms": 10000, 00:40:23.905 "arbitration_burst": 0, 00:40:23.905 "low_priority_weight": 0, 00:40:23.905 "medium_priority_weight": 0, 00:40:23.905 "high_priority_weight": 0, 00:40:23.905 "nvme_adminq_poll_period_us": 10000, 00:40:23.905 "nvme_ioq_poll_period_us": 0, 00:40:23.905 "io_queue_requests": 512, 00:40:23.905 
"delay_cmd_submit": true, 00:40:23.905 "transport_retry_count": 4, 00:40:23.905 "bdev_retry_count": 3, 00:40:23.905 "transport_ack_timeout": 0, 00:40:23.905 "ctrlr_loss_timeout_sec": 0, 00:40:23.905 "reconnect_delay_sec": 0, 00:40:23.905 "fast_io_fail_timeout_sec": 0, 00:40:23.905 "disable_auto_failback": false, 00:40:23.905 "generate_uuids": false, 00:40:23.905 "transport_tos": 0, 00:40:23.905 "nvme_error_stat": false, 00:40:23.905 "rdma_srq_size": 0, 00:40:23.905 "io_path_stat": false, 00:40:23.905 "allow_accel_sequence": false, 00:40:23.905 "rdma_max_cq_size": 0, 00:40:23.905 "rdma_cm_event_timeout_ms": 0, 00:40:23.905 "dhchap_digests": [ 00:40:23.905 "sha256", 00:40:23.905 "sha384", 00:40:23.905 "sha512" 00:40:23.905 ], 00:40:23.905 "dhchap_dhgroups": [ 00:40:23.905 "null", 00:40:23.905 "ffdhe2048", 00:40:23.905 "ffdhe3072", 00:40:23.905 "ffdhe4096", 00:40:23.905 "ffdhe6144", 00:40:23.905 "ffdhe8192" 00:40:23.905 ] 00:40:23.905 } 00:40:23.905 }, 00:40:23.905 { 00:40:23.905 "method": "bdev_nvme_attach_controller", 00:40:23.905 "params": { 00:40:23.905 "name": "nvme0", 00:40:23.905 "trtype": "TCP", 00:40:23.905 "adrfam": "IPv4", 00:40:23.905 "traddr": "127.0.0.1", 00:40:23.905 "trsvcid": "4420", 00:40:23.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:23.906 "prchk_reftag": false, 00:40:23.906 "prchk_guard": false, 00:40:23.906 "ctrlr_loss_timeout_sec": 0, 00:40:23.906 "reconnect_delay_sec": 0, 00:40:23.906 "fast_io_fail_timeout_sec": 0, 00:40:23.906 "psk": "key0", 00:40:23.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:23.906 "hdgst": false, 00:40:23.906 "ddgst": false, 00:40:23.906 "multipath": "multipath" 00:40:23.906 } 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "method": "bdev_nvme_set_hotplug", 00:40:23.906 "params": { 00:40:23.906 "period_us": 100000, 00:40:23.906 "enable": false 00:40:23.906 } 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "method": "bdev_wait_for_examine" 00:40:23.906 } 00:40:23.906 ] 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "subsystem": "nbd", 00:40:23.906 "config": [] 00:40:23.906 } 00:40:23.906 ] 00:40:23.906 }' 00:40:23.906 17:56:15 keyring_file -- keyring/file.sh@115 -- # killprocess 1791186 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1791186 ']' 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1791186 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1791186 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1791186' 00:40:23.906 killing process with pid 1791186 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@973 -- # kill 1791186 00:40:23.906 Received shutdown signal, test time was about 1.000000 seconds 00:40:23.906 00:40:23.906 Latency(us) 00:40:23.906 [2024-12-06T16:56:15.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:23.906 [2024-12-06T16:56:15.972Z] =================================================================================================================== 00:40:23.906 [2024-12-06T16:56:15.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:23.906 17:56:15 
keyring_file -- common/autotest_common.sh@978 -- # wait 1791186 00:40:23.906 17:56:15 keyring_file -- keyring/file.sh@118 -- # bperfpid=1791416 00:40:23.906 17:56:15 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1791416 /var/tmp/bperf.sock 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1791416 ']' 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:23.906 17:56:15 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:23.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:23.906 17:56:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:23.906 17:56:15 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:23.906 "subsystems": [ 00:40:23.906 { 00:40:23.906 "subsystem": "keyring", 00:40:23.906 "config": [ 00:40:23.906 { 00:40:23.906 "method": "keyring_file_add_key", 00:40:23.906 "params": { 00:40:23.906 "name": "key0", 00:40:23.906 "path": "/tmp/tmp.JuTeFKnRP6" 00:40:23.906 } 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "method": "keyring_file_add_key", 00:40:23.906 "params": { 00:40:23.906 "name": "key1", 00:40:23.906 "path": "/tmp/tmp.7aKXHgZD3P" 00:40:23.906 } 00:40:23.906 } 00:40:23.906 ] 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "subsystem": "iobuf", 00:40:23.906 "config": [ 00:40:23.906 { 00:40:23.906 "method": "iobuf_set_options", 00:40:23.906 "params": { 00:40:23.906 "small_pool_count": 8192, 00:40:23.906 "large_pool_count": 1024, 00:40:23.906 "small_bufsize": 8192, 00:40:23.906 "large_bufsize": 135168, 00:40:23.906 "enable_numa": false 00:40:23.906 } 00:40:23.906 } 00:40:23.906 ] 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "subsystem": "sock", 00:40:23.906 "config": [ 00:40:23.906 { 00:40:23.906 "method": "sock_set_default_impl", 00:40:23.906 "params": { 00:40:23.906 "impl_name": "posix" 00:40:23.906 } 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "method": "sock_impl_set_options", 00:40:23.906 "params": { 00:40:23.906 "impl_name": "ssl", 00:40:23.906 "recv_buf_size": 4096, 00:40:23.906 "send_buf_size": 4096, 00:40:23.906 "enable_recv_pipe": true, 00:40:23.906 "enable_quickack": false, 00:40:23.906 "enable_placement_id": 0, 00:40:23.906 "enable_zerocopy_send_server": true, 00:40:23.906 "enable_zerocopy_send_client": false, 00:40:23.906 "zerocopy_threshold": 0, 00:40:23.906 "tls_version": 0, 00:40:23.906 "enable_ktls": false 00:40:23.906 } 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "method": "sock_impl_set_options", 00:40:23.906 "params": { 00:40:23.906 "impl_name": "posix", 00:40:23.906 "recv_buf_size": 2097152, 00:40:23.906 "send_buf_size": 2097152, 00:40:23.906 "enable_recv_pipe": true, 00:40:23.906 "enable_quickack": false, 00:40:23.906 "enable_placement_id": 0, 00:40:23.906 "enable_zerocopy_send_server": true, 00:40:23.906 "enable_zerocopy_send_client": false, 00:40:23.906 "zerocopy_threshold": 0, 00:40:23.906 "tls_version": 0, 00:40:23.906 "enable_ktls": false 00:40:23.906 } 00:40:23.906 } 00:40:23.906 ] 00:40:23.906 }, 
00:40:23.906 { 00:40:23.906 "subsystem": "vmd", 00:40:23.906 "config": [] 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "subsystem": "accel", 00:40:23.906 "config": [ 00:40:23.906 { 00:40:23.906 "method": "accel_set_options", 00:40:23.906 "params": { 00:40:23.906 "small_cache_size": 128, 00:40:23.906 "large_cache_size": 16, 00:40:23.906 "task_count": 2048, 00:40:23.906 "sequence_count": 2048, 00:40:23.906 "buf_count": 2048 00:40:23.906 } 00:40:23.906 } 00:40:23.906 ] 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "subsystem": "bdev", 00:40:23.906 "config": [ 00:40:23.906 { 00:40:23.906 "method": "bdev_set_options", 00:40:23.906 "params": { 00:40:23.906 "bdev_io_pool_size": 65535, 00:40:23.906 "bdev_io_cache_size": 256, 00:40:23.906 "bdev_auto_examine": true, 00:40:23.906 "iobuf_small_cache_size": 128, 00:40:23.906 "iobuf_large_cache_size": 16 00:40:23.906 } 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "method": "bdev_raid_set_options", 00:40:23.906 "params": { 00:40:23.906 "process_window_size_kb": 1024, 00:40:23.906 "process_max_bandwidth_mb_sec": 0 00:40:23.906 } 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "method": "bdev_iscsi_set_options", 00:40:23.906 "params": { 00:40:23.906 "timeout_sec": 30 00:40:23.906 } 00:40:23.906 }, 00:40:23.906 { 00:40:23.906 "method": "bdev_nvme_set_options", 00:40:23.906 "params": { 00:40:23.906 "action_on_timeout": "none", 00:40:23.906 "timeout_us": 0, 00:40:23.906 "timeout_admin_us": 0, 00:40:23.906 "keep_alive_timeout_ms": 10000, 00:40:23.906 "arbitration_burst": 0, 00:40:23.906 "low_priority_weight": 0, 00:40:23.906 "medium_priority_weight": 0, 00:40:23.906 "high_priority_weight": 0, 00:40:23.906 "nvme_adminq_poll_period_us": 10000, 00:40:23.906 "nvme_ioq_poll_period_us": 0, 00:40:23.906 "io_queue_requests": 512, 00:40:23.906 "delay_cmd_submit": true, 00:40:23.906 "transport_retry_count": 4, 00:40:23.906 "bdev_retry_count": 3, 00:40:23.906 "transport_ack_timeout": 0, 00:40:23.906 "ctrlr_loss_timeout_sec": 0, 00:40:23.906 "reconnect_delay_sec": 0, 00:40:23.906 "fast_io_fail_timeout_sec": 0, 00:40:23.906 "disable_auto_failback": false, 00:40:23.906 "generate_uuids": false, 00:40:23.906 "transport_tos": 0, 00:40:23.906 "nvme_error_stat": false, 00:40:23.906 "rdma_srq_size": 0, 00:40:23.906 "io_path_stat": false, 00:40:23.906 "allow_accel_sequence": false, 00:40:23.907 "rdma_max_cq_size": 0, 00:40:23.907 "rdma_cm_event_timeout_ms": 0, 00:40:23.907 "dhchap_digests": [ 00:40:23.907 "sha256", 00:40:23.907 "sha384", 00:40:23.907 "sha512" 00:40:23.907 ], 00:40:23.907 "dhchap_dhgroups": [ 00:40:23.907 "null", 00:40:23.907 "ffdhe2048", 00:40:23.907 "ffdhe3072", 00:40:23.907 "ffdhe4096", 00:40:23.907 "ffdhe6144", 00:40:23.907 "ffdhe8192" 00:40:23.907 ] 00:40:23.907 } 00:40:23.907 }, 00:40:23.907 { 00:40:23.907 "method": "bdev_nvme_attach_controller", 00:40:23.907 "params": { 00:40:23.907 "name": "nvme0", 00:40:23.907 "trtype": "TCP", 00:40:23.907 "adrfam": "IPv4", 00:40:23.907 "traddr": "127.0.0.1", 00:40:23.907 "trsvcid": "4420", 00:40:23.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:23.907 "prchk_reftag": false, 00:40:23.907 "prchk_guard": false, 00:40:23.907 "ctrlr_loss_timeout_sec": 0, 00:40:23.907 "reconnect_delay_sec": 0, 00:40:23.907 "fast_io_fail_timeout_sec": 0, 00:40:23.907 "psk": "key0", 00:40:23.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:23.907 "hdgst": false, 00:40:23.907 "ddgst": false, 00:40:23.907 "multipath": "multipath" 00:40:23.907 } 00:40:23.907 }, 00:40:23.907 { 00:40:23.907 "method": "bdev_nvme_set_hotplug", 00:40:23.907 "params": { 
00:40:23.907 "period_us": 100000, 00:40:23.907 "enable": false 00:40:23.907 } 00:40:23.907 }, 00:40:23.907 { 00:40:23.907 "method": "bdev_wait_for_examine" 00:40:23.907 } 00:40:23.907 ] 00:40:23.907 }, 00:40:23.907 { 00:40:23.907 "subsystem": "nbd", 00:40:23.907 "config": [] 00:40:23.907 } 00:40:23.907 ] 00:40:23.907 }' 00:40:23.907 [2024-12-06 17:56:15.931225] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 00:40:23.907 [2024-12-06 17:56:15.931283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1791416 ] 00:40:24.166 [2024-12-06 17:56:16.014861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.166 [2024-12-06 17:56:16.043730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:24.166 [2024-12-06 17:56:16.187507] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:24.735 17:56:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:24.735 17:56:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:24.735 17:56:16 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:40:24.735 17:56:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:24.735 17:56:16 keyring_file -- keyring/file.sh@121 -- # jq length 00:40:24.995 17:56:16 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:24.995 17:56:16 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:40:24.995 17:56:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:24.995 17:56:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:24.995 17:56:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:24.995 17:56:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:24.995 17:56:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.255 17:56:17 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:40:25.255 17:56:17 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:40:25.255 17:56:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:25.255 17:56:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.255 17:56:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.255 17:56:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.255 17:56:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:25.255 17:56:17 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:40:25.255 17:56:17 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:40:25.255 17:56:17 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:40:25.255 17:56:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:25.515 17:56:17 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:40:25.515 17:56:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:25.515 17:56:17 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.JuTeFKnRP6 /tmp/tmp.7aKXHgZD3P 00:40:25.515 17:56:17 keyring_file -- keyring/file.sh@20 -- # killprocess 1791416 00:40:25.515 17:56:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1791416 ']' 00:40:25.515 17:56:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1791416 00:40:25.515 17:56:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:25.515 17:56:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:25.515 17:56:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1791416 00:40:25.515 17:56:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:25.515 17:56:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:25.515 17:56:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1791416' 00:40:25.515 killing process with pid 1791416 00:40:25.515 17:56:17 keyring_file -- common/autotest_common.sh@973 -- # kill 1791416 00:40:25.515 Received shutdown signal, test time was about 1.000000 seconds 00:40:25.515 00:40:25.515 Latency(us) 00:40:25.515 [2024-12-06T16:56:17.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:25.515 [2024-12-06T16:56:17.581Z] =================================================================================================================== 00:40:25.515 [2024-12-06T16:56:17.581Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:25.515 17:56:17 keyring_file -- common/autotest_common.sh@978 -- # wait 1791416 00:40:25.775 17:56:17 keyring_file -- keyring/file.sh@21 -- # killprocess 1791169 00:40:25.775 17:56:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1791169 ']' 00:40:25.775 17:56:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1791169 00:40:25.775 17:56:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:25.775 17:56:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:25.775 17:56:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1791169 00:40:25.775 17:56:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:25.775 17:56:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:25.775 17:56:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1791169' 00:40:25.775 killing process with pid 1791169 00:40:25.775 17:56:17 keyring_file -- common/autotest_common.sh@973 -- # kill 1791169 00:40:25.775 17:56:17 keyring_file -- common/autotest_common.sh@978 -- # wait 1791169 00:40:26.033 00:40:26.033 real 0m12.024s 00:40:26.033 user 0m29.104s 00:40:26.033 sys 0m2.678s 00:40:26.033 17:56:17 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:26.033 17:56:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:26.033 ************************************ 00:40:26.033 END TEST keyring_file 00:40:26.033 ************************************ 00:40:26.033 17:56:17 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:40:26.033 17:56:17 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:26.033 17:56:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:26.033 17:56:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:26.033 17:56:17 
-- common/autotest_common.sh@10 -- # set +x 00:40:26.033 ************************************ 00:40:26.033 START TEST keyring_linux 00:40:26.033 ************************************ 00:40:26.033 17:56:17 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:26.033 Joined session keyring: 440651273 00:40:26.033 * Looking for test storage... 00:40:26.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:26.033 17:56:18 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:26.033 17:56:18 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:40:26.033 17:56:18 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:26.293 17:56:18 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:26.293 17:56:18 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:26.293 17:56:18 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:26.293 17:56:18 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:26.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.293 --rc genhtml_branch_coverage=1 00:40:26.293 --rc genhtml_function_coverage=1 00:40:26.293 --rc genhtml_legend=1 00:40:26.293 --rc geninfo_all_blocks=1 00:40:26.293 --rc geninfo_unexecuted_blocks=1 00:40:26.293 00:40:26.293 ' 00:40:26.293 17:56:18 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:26.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.293 --rc genhtml_branch_coverage=1 00:40:26.293 --rc genhtml_function_coverage=1 00:40:26.293 --rc genhtml_legend=1 00:40:26.293 --rc geninfo_all_blocks=1 00:40:26.293 --rc geninfo_unexecuted_blocks=1 00:40:26.293 00:40:26.293 ' 00:40:26.293 17:56:18 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:26.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.293 --rc genhtml_branch_coverage=1 00:40:26.293 --rc genhtml_function_coverage=1 00:40:26.293 --rc genhtml_legend=1 00:40:26.293 --rc geninfo_all_blocks=1 00:40:26.293 --rc geninfo_unexecuted_blocks=1 00:40:26.293 00:40:26.293 ' 00:40:26.293 17:56:18 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:26.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.293 --rc genhtml_branch_coverage=1 00:40:26.293 --rc genhtml_function_coverage=1 00:40:26.293 --rc genhtml_legend=1 00:40:26.293 --rc geninfo_all_blocks=1 00:40:26.293 --rc geninfo_unexecuted_blocks=1 00:40:26.293 00:40:26.293 ' 00:40:26.293 17:56:18 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:26.293 17:56:18 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:26.293 17:56:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:26.293 17:56:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:26.293 17:56:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:26.294 17:56:18 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:26.294 17:56:18 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:26.294 17:56:18 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:26.294 17:56:18 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:26.294 17:56:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.294 17:56:18 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.294 17:56:18 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.294 17:56:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:26.294 17:56:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:26.294 17:56:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:26.294 17:56:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:26.294 17:56:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:26.294 17:56:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:26.294 17:56:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:26.294 17:56:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:26.294 /tmp/:spdk-test:key0 00:40:26.294 17:56:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:26.294 
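For context on the prep_key trace above: keyring/common.sh derives the on-disk PSK by piping the raw key through an inline "python -" snippet (nvmf/common.sh@733). The sketch below is a hedged reconstruction of that transform, assuming the TLS PSK interchange form is NVMeTLSkey-1:<digest>:base64(key || CRC32(key)): with the CRC appended little-endian and a "00" digest field for digest 0; if those assumptions hold, it reproduces the NVMeTLSkey-1:00:MDAxMTIy... value written to /tmp/:spdk-test:key0 above.

# Hedged sketch of format_interchange_psk for key0 (digest 0); the CRC32
# byte order and the "00" digest field are assumptions, not read from this log.
python3 -c '
import base64, zlib
key = b"00112233445566778899aabbccddeeff"      # key0 from keyring/linux.sh@13
crc = zlib.crc32(key).to_bytes(4, "little")    # CRC32 of the key, little-endian
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
' > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0                # as keyring/common.sh@21 does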
17:56:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:26.294 17:56:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:26.294 17:56:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:26.294 /tmp/:spdk-test:key1 00:40:26.294 17:56:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1791545 00:40:26.294 17:56:18 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1791545 00:40:26.294 17:56:18 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:26.294 17:56:18 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1791545 ']' 00:40:26.294 17:56:18 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:26.294 17:56:18 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:26.294 17:56:18 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:26.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:26.294 17:56:18 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:26.294 17:56:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:26.294 [2024-12-06 17:56:18.352684] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:40:26.294 [2024-12-06 17:56:18.352742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1791545 ] 00:40:26.554 [2024-12-06 17:56:18.435665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:26.554 [2024-12-06 17:56:18.465667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.124 17:56:19 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:27.124 17:56:19 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:27.124 17:56:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:27.124 17:56:19 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.124 17:56:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:27.124 [2024-12-06 17:56:19.140309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:27.124 null0 00:40:27.124 [2024-12-06 17:56:19.172363] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:27.124 [2024-12-06 17:56:19.172725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:27.384 17:56:19 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.384 17:56:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:27.384 820414494 00:40:27.384 17:56:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:27.384 200570647 00:40:27.384 17:56:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1791554 00:40:27.384 17:56:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1791554 /var/tmp/bperf.sock 00:40:27.384 17:56:19 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:27.384 17:56:19 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1791554 ']' 00:40:27.384 17:56:19 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:27.384 17:56:19 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:27.384 17:56:19 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:27.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:27.384 17:56:19 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:27.384 17:56:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:27.384 [2024-12-06 17:56:19.248515] Starting SPDK v25.01-pre git sha1 99034762d / DPDK 24.03.0 initialization... 
00:40:27.385 [2024-12-06 17:56:19.248562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1791554 ] 00:40:27.385 [2024-12-06 17:56:19.330242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.385 [2024-12-06 17:56:19.359809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:28.324 17:56:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:28.324 17:56:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:28.324 17:56:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:28.324 17:56:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:28.324 17:56:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:28.324 17:56:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:28.582 17:56:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:28.582 17:56:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:28.582 [2024-12-06 17:56:20.624668] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:28.843 nvme0n1 00:40:28.843 17:56:20 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:28.843 17:56:20 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:28.843 17:56:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:28.843 17:56:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:28.843 17:56:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:28.843 17:56:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:28.843 17:56:20 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:28.843 17:56:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:28.843 17:56:20 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:28.843 17:56:20 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:28.843 17:56:20 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:28.843 17:56:20 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:28.843 17:56:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.104 17:56:21 keyring_linux -- keyring/linux.sh@25 -- # sn=820414494 00:40:29.104 17:56:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:29.104 17:56:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:29.104 17:56:21 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 820414494 == \8\2\0\4\1\4\4\9\4 ]] 00:40:29.104 17:56:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 820414494 00:40:29.104 17:56:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:29.104 17:56:21 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:29.104 Running I/O for 1 seconds... 00:40:30.488 24627.00 IOPS, 96.20 MiB/s 00:40:30.488 Latency(us) 00:40:30.488 [2024-12-06T16:56:22.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:30.488 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:30.488 nvme0n1 : 1.01 24627.11 96.20 0.00 0.00 5181.96 4341.76 9939.63 00:40:30.488 [2024-12-06T16:56:22.554Z] =================================================================================================================== 00:40:30.488 [2024-12-06T16:56:22.554Z] Total : 24627.11 96.20 0.00 0.00 5181.96 4341.76 9939.63 00:40:30.488 { 00:40:30.488 "results": [ 00:40:30.488 { 00:40:30.488 "job": "nvme0n1", 00:40:30.488 "core_mask": "0x2", 00:40:30.488 "workload": "randread", 00:40:30.488 "status": "finished", 00:40:30.488 "queue_depth": 128, 00:40:30.488 "io_size": 4096, 00:40:30.488 "runtime": 1.005193, 00:40:30.488 "iops": 24627.111410445556, 00:40:30.488 "mibps": 96.19965394705295, 00:40:30.488 "io_failed": 0, 00:40:30.488 "io_timeout": 0, 00:40:30.488 "avg_latency_us": 5181.960928028007, 00:40:30.488 "min_latency_us": 4341.76, 00:40:30.488 "max_latency_us": 9939.626666666667 00:40:30.488 } 00:40:30.488 ], 00:40:30.488 "core_count": 1 00:40:30.488 } 00:40:30.488 17:56:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:30.488 17:56:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:30.488 17:56:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:30.488 17:56:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:30.488 17:56:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:30.488 17:56:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:30.488 17:56:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:30.488 17:56:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
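The check_keys round-trip traced above verifies that SPDK and the kernel agree on the same key: it counts registered keys over the bperf RPC socket, reads the serial number (.sn) SPDK reports for :spdk-test:key0, resolves the same name with keyctl search, and compares keyctl print output against the expected interchange PSK. A standalone sketch of that cross-check, assuming bperf is still listening on /var/tmp/bperf.sock:

# Hedged sketch of the check_keys cross-check (keyring/linux.sh@19-@27).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
count=$("$rpc" -s "$sock" keyring_get_keys | jq length)      # expect 1 while attached
sn=$("$rpc" -s "$sock" keyring_get_keys \
    | jq -r '.[] | select(.name == ":spdk-test:key0").sn')   # SPDK's view of the serial
[ "$sn" = "$(keyctl search @s user :spdk-test:key0)" ]       # kernel agrees on the serial
keyctl print "$sn"                                           # prints NVMeTLSkey-1:00:...: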
00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:30.748 17:56:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:30.748 [2024-12-06 17:56:22.731761] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:30.748 [2024-12-06 17:56:22.732203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee5620 (107): Transport endpoint is not connected 00:40:30.748 [2024-12-06 17:56:22.733199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee5620 (9): Bad file descriptor 00:40:30.748 [2024-12-06 17:56:22.734201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:30.748 [2024-12-06 17:56:22.734209] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:30.748 [2024-12-06 17:56:22.734215] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:30.748 [2024-12-06 17:56:22.734221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
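The attach failure traced above (and the JSON-RPC request dump that follows) is the point of this test step: keyring/linux.sh@84 runs the attach through the NOT helper from autotest_common.sh, so the nonzero exit from bperf_cmd is converted into a pass. A simplified, hedged sketch of that inversion idiom (the real helper also validates the wrapped command via valid_exec_arg and tracks the exit code in the es bookkeeping visible in the trace):

# Hedged, simplified sketch of the NOT idiom: succeed only if the wrapped
# command fails, as required when attaching with the wrong PSK must not work.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded: the test should fail here
    fi
    return 0        # command failed, as the test requires
}
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1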
00:40:30.748 request: 00:40:30.748 { 00:40:30.748 "name": "nvme0", 00:40:30.748 "trtype": "tcp", 00:40:30.748 "traddr": "127.0.0.1", 00:40:30.748 "adrfam": "ipv4", 00:40:30.748 "trsvcid": "4420", 00:40:30.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:30.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:30.748 "prchk_reftag": false, 00:40:30.748 "prchk_guard": false, 00:40:30.748 "hdgst": false, 00:40:30.748 "ddgst": false, 00:40:30.748 "psk": ":spdk-test:key1", 00:40:30.748 "allow_unrecognized_csi": false, 00:40:30.748 "method": "bdev_nvme_attach_controller", 00:40:30.748 "req_id": 1 00:40:30.748 } 00:40:30.748 Got JSON-RPC error response 00:40:30.748 response: 00:40:30.748 { 00:40:30.748 "code": -5, 00:40:30.748 "message": "Input/output error" 00:40:30.748 } 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@33 -- # sn=820414494 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 820414494 00:40:30.748 1 links removed 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@33 -- # sn=200570647 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 200570647 00:40:30.748 1 links removed 00:40:30.748 17:56:22 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1791554 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1791554 ']' 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1791554 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:30.748 17:56:22 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1791554 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1791554' 00:40:31.007 killing process with pid 1791554 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@973 -- # kill 1791554 00:40:31.007 Received shutdown signal, test time was about 1.000000 seconds 00:40:31.007 00:40:31.007 
Latency(us) 00:40:31.007 [2024-12-06T16:56:23.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:31.007 [2024-12-06T16:56:23.073Z] =================================================================================================================== 00:40:31.007 [2024-12-06T16:56:23.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@978 -- # wait 1791554 00:40:31.007 17:56:22 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1791545 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1791545 ']' 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1791545 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1791545 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1791545' 00:40:31.007 killing process with pid 1791545 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@973 -- # kill 1791545 00:40:31.007 17:56:22 keyring_linux -- common/autotest_common.sh@978 -- # wait 1791545 00:40:31.266 00:40:31.266 real 0m5.201s 00:40:31.266 user 0m9.807s 00:40:31.266 sys 0m1.384s 00:40:31.266 17:56:23 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:31.266 17:56:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:31.266 ************************************ 00:40:31.266 END TEST keyring_linux 00:40:31.266 ************************************ 00:40:31.266 17:56:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:31.266 17:56:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:31.266 17:56:23 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:40:31.266 17:56:23 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:40:31.266 17:56:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:40:31.266 17:56:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:31.267 17:56:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:31.267 17:56:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:40:31.267 17:56:23 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:40:31.267 17:56:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:31.267 17:56:23 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:40:31.267 17:56:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:31.267 17:56:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:31.267 17:56:23 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:40:31.267 17:56:23 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:40:31.267 17:56:23 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:40:31.267 17:56:23 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:40:31.267 17:56:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:31.267 17:56:23 -- common/autotest_common.sh@10 -- # set +x 00:40:31.267 17:56:23 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:40:31.267 17:56:23 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:40:31.267 17:56:23 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:40:31.267 17:56:23 -- common/autotest_common.sh@10 -- # set +x 00:40:39.395 INFO: APP EXITING 
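The "1 links removed" messages earlier in the run come from linux.sh's EXIT-trap cleanup (trap cleanup EXIT at keyring/linux.sh@45): for each test key it resolves the serial number in the session keyring and unlinks it, so nothing outlives the keyctl-session-wrapper that launched this test. A standalone sketch of that loop, using the key names registered in this run:

# Hedged sketch of the cleanup loop traced above (keyring/linux.sh@38-@34):
for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name")    # e.g. 820414494 and 200570647 above
    keyctl unlink "$sn"                    # emits "1 links removed"
done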
00:40:39.395 INFO: killing all VMs
00:40:39.395 INFO: killing vhost app
00:40:39.395 INFO: EXIT DONE
00:40:41.940 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:40:41.940 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:40:41.940 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:40:41.940 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:40:41.940 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:40:41.940 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:40:41.940 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:40:41.940 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:40:41.940 0000:65:00.0 (144d a80a): Already using the nvme driver
00:40:41.940 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:40:42.201 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:40:42.201 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:40:42.201 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:40:42.201 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:40:42.201 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:40:42.201 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:40:42.201 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:40:46.402 Cleaning
00:40:46.402 Removing: /var/run/dpdk/spdk0/config
00:40:46.402 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:40:46.402 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:40:46.402 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:40:46.402 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:40:46.402 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:40:46.402 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:40:46.402 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:40:46.402 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:40:46.402 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:40:46.402 Removing: /var/run/dpdk/spdk0/hugepage_info
00:40:46.402 Removing: /var/run/dpdk/spdk1/config
00:40:46.402 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:40:46.402 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:40:46.402 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:40:46.402 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:40:46.402 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:40:46.402 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:40:46.402 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:40:46.402 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:40:46.402 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:40:46.402 Removing: /var/run/dpdk/spdk1/hugepage_info
00:40:46.402 Removing: /var/run/dpdk/spdk2/config
00:40:46.402 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:40:46.402 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:40:46.402 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:40:46.402 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:40:46.402 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:40:46.402 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:40:46.402 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:40:46.402 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:40:46.402 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:40:46.402 Removing: /var/run/dpdk/spdk2/hugepage_info
00:40:46.402 Removing: /var/run/dpdk/spdk3/config
00:40:46.402 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:40:46.402 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:40:46.402 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:40:46.402 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:40:46.402 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:40:46.402 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:40:46.402 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:40:46.402 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:40:46.402 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:40:46.402 Removing: /var/run/dpdk/spdk3/hugepage_info
00:40:46.402 Removing: /var/run/dpdk/spdk4/config
00:40:46.402 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:40:46.402 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:40:46.402 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:40:46.402 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:40:46.402 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:40:46.402 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:40:46.402 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:40:46.402 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:40:46.402 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:40:46.402 Removing: /var/run/dpdk/spdk4/hugepage_info
00:40:46.402 Removing: /dev/shm/bdev_svc_trace.1
00:40:46.402 Removing: /dev/shm/nvmf_trace.0
00:40:46.402 Removing: /dev/shm/spdk_tgt_trace.pid1473110
00:40:46.402 Removing: /var/run/dpdk/spdk0
00:40:46.402 Removing: /var/run/dpdk/spdk1
00:40:46.402 Removing: /var/run/dpdk/spdk2
00:40:46.402 Removing: /var/run/dpdk/spdk3
00:40:46.402 Removing: /var/run/dpdk/spdk4
00:40:46.402 Removing: /var/run/dpdk/spdk_pid1471619
00:40:46.402 Removing: /var/run/dpdk/spdk_pid1473110
00:40:46.402 Removing: /var/run/dpdk/spdk_pid1473957
00:40:46.402 Removing: /var/run/dpdk/spdk_pid1474996
00:40:46.402 Removing: /var/run/dpdk/spdk_pid1475336
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1476410
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1476575
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1476873
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1478019
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1478752
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1479106
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1479451
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1479688
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1480087
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1480444
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1480681
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1480950
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1482252
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1485528
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1485889
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1486248
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1486320
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1486935
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1486975
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1487419
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1487579
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1487801
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1488056
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1488258
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1488432
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1488890
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1489236
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1489631
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1494180
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1499546
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1512212
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1512898
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1518149
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1518620
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1523719
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1530804
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1533901
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1546660
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1557583
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1560075
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1561264
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1582069
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1586932
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1642682
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1647504
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1650285
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1653162
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1653170
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1653341
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1653411
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1653940
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1653985
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1653990
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654013
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654023
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654032
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654097
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654157
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654226
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654279
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654281
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654309
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654492
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1654653
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1657460
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1663712
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1666300
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1666424
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1666571
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1666599
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1666631
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1666659
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1666748
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1666905
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1667053
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1667132
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1667335
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1667420
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1667513
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1670069
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1673275
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1673276
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1673277
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1675787
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1681021
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1681773
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1684612
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1684857
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1685135
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1685404
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1687962
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1690590
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1693120
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1698716
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1698725
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1701271
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1701293
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1701312
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1701341
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1701346
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1703913
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1704110
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1706792
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1707011
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1709645
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1712440
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1715492
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1719707
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1719709
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1729333
00:40:46.403 Removing: /var/run/dpdk/spdk_pid1729386
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1729451
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1729508
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1729629
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1729693
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1729744
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1729809
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1732354
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1732382
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1735040
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1735099
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1738253
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1740892
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1744125
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1744166
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1746729
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1746766
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1749307
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1751963
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1752214
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1757574
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1763087
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1763207
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1763285
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1772197
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1774725
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1775091
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1777861
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1777866
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1781347
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1782129
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1782460
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1782710
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1783036
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1783305
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1787914
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1787950
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1787992
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1789056
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1789093
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1789129
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1791169
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1791186
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1791416
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1791545
00:40:46.664 Removing: /var/run/dpdk/spdk_pid1791554
00:40:46.664 Clean
00:40:46.925 17:56:38 -- common/autotest_common.sh@1453 -- # return 0
00:40:46.925 17:56:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:40:46.925 17:56:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:46.925 17:56:38 -- common/autotest_common.sh@10 -- # set +x
00:40:46.925 17:56:38 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:40:46.925 17:56:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:46.925 17:56:38 -- common/autotest_common.sh@10 -- # set +x
00:40:46.925 17:56:38 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:46.925 17:56:38 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:40:46.925 17:56:38 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:40:46.925 17:56:38 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:40:46.925 17:56:38 -- spdk/autotest.sh@398 -- # hostname
00:40:46.925 17:56:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:40:47.186 geninfo: WARNING: invalid characters removed from testname!
00:41:13.773 17:57:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:15.684 17:57:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:17.064 17:57:09 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:19.609 17:57:11 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:20.993 17:57:12 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:22.907 17:57:14 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:24.312 17:57:16 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:41:24.312 17:57:16 -- spdk/autorun.sh@1 -- $ timing_finish
00:41:24.312 17:57:16 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:41:24.312 17:57:16 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:41:24.312 17:57:16 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:41:24.312 17:57:16 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:41:24.312 + [[ -n 1386197 ]]
00:41:24.312 + sudo kill 1386197
00:41:24.378 [Pipeline] }
00:41:24.401 [Pipeline] // stage
00:41:24.406 [Pipeline] }
00:41:24.420 [Pipeline] // timeout
00:41:24.425 [Pipeline] }
00:41:24.439 [Pipeline] // catchError
00:41:24.443 [Pipeline] }
00:41:24.457 [Pipeline] // wrap
00:41:24.462 [Pipeline] }
00:41:24.474 [Pipeline] // catchError
00:41:24.482 [Pipeline] stage
00:41:24.484 [Pipeline] { (Epilogue)
00:41:24.497 [Pipeline] catchError
00:41:24.498 [Pipeline] {
00:41:24.510 [Pipeline] echo
00:41:24.512 Cleanup processes
00:41:24.517 [Pipeline] sh
00:41:24.898 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:24.898 1801603 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:24.913 [Pipeline] sh
00:41:25.204 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:25.204 ++ grep -v 'sudo pgrep'
00:41:25.204 ++ awk '{print $1}'
00:41:25.204 + sudo kill -9
00:41:25.204 + true
00:41:25.218 [Pipeline] sh
00:41:25.512 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:35.521 [Pipeline] sh
00:41:35.810 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:35.810 Artifacts sizes are good
00:41:35.826 [Pipeline] archiveArtifacts
00:41:35.833 Archiving artifacts
00:41:35.964 [Pipeline] sh
00:41:36.253 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:41:36.269 [Pipeline] cleanWs
00:41:36.279 [WS-CLEANUP] Deleting project workspace...
00:41:36.279 [WS-CLEANUP] Deferred wipeout is used...
00:41:36.286 [WS-CLEANUP] done
00:41:36.288 [Pipeline] }
00:41:36.305 [Pipeline] // catchError
00:41:36.316 [Pipeline] sh
00:41:36.605 + logger -p user.info -t JENKINS-CI
00:41:36.617 [Pipeline] }
00:41:36.634 [Pipeline] // stage
00:41:36.639 [Pipeline] }
00:41:36.652 [Pipeline] // node
00:41:36.657 [Pipeline] End of Pipeline
00:41:36.709 Finished: SUCCESS
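
For reference, the lcov post-processing recorded near the end of the run can be replayed offline from the same tracefiles. A minimal sketch, assuming lcov is on PATH and the cov_base.info/cov_test.info files captured by the run are in the working directory (paths are shortened here; the run writes them under spdk/../output):

+ lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
+ lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
+ lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info

The run then repeats the same -r filter for '*/examples/vmd/*', '*/app/spdk_lspci/*', and '*/app/spdk_top/*', rewriting cov_total.info in place each time.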